Feed aggregator

DrupalEasy: DrupalEasy Podcast 174 - Floss Belt (Ryan Szrama - Drupal Commerce)

Planet Drupal -

Direct .mp3 file download.

Ryan Szrama (rszrama), President and CEO of Commerce Guys, project leader of Drupal Commerce, and proud ex-Best Buy Geek Squad member joins Ryan, Ted, and Mike for a comprehensive discussion of Commerce Guys' recent relaunch as a standalone company and the current development progress of Drupal Commerce for Drupal 8. We also discussed Drupal 8.1, a potential future for the theme layer, the absolute correct pronunciation of "Szrama", and a big announcement from Ted.

Interview, DrupalEasy News, Three Stories, Sponsors, Picks of the Week, Upcoming Events, Follow us on Twitter

Five Questions (answers only):
  1. Kayaking
  2. Clash Royale
  3. Become a beverage professional
  4. Llamas
  5. DrupalCon Barcelona 2007
Intro Music

The Dean Scream.

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Dries Buytaert: Handling context in "outside-in"

Planet Drupal -

In a recent post we talked about how introducing outside-in experiences could improve the Drupal site-building experience by letting you immediately edit simple configuration without leaving the page. In a follow-up blog post, we provided concrete examples of how we can apply outside-in to Drupal.

The feedback was overwhelmingly positive. However, there were also some really important questions raised. The most common concern was the idea that the mockups ignored "context".

When we showed how to place a block "outside-in", we placed it on a single page. However, in Drupal a block can also be made visible for specific pages, content types, roles, languages, or any number of other contexts. The flexibility this provides is one place where Drupal shines.

Why context matters

For the sake of simplicity and focus we intentionally did not address how to handle context in outside-in in the last post. However, incorporating context into "outside-in" thinking is fundamentally important for at least two reasons:

  1. Managing context is essential to site building. Site builders commonly want to place a block or menu item that will be visible not just on one page but on several, or not to all users but only to some. A key principle of outside-in is previewing as you edit. The challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator.
  2. Managing context is a big usability problem on its own. Even without outside-in patterns, making context simple and usable is an unsolved problem. Modules like Context and Panels have added lots of useful functionality, but all of it happens away from the rendered page.
The ingredients: user groups and page groups

To begin to incorporate context into outside-in, Kevin Oleary, with input from yoroy, Bojhan, Angie Byron, Gábor Hojtsy and others, has iterated on the block placement examples that we presented in the last post, to incorporate some ideas for how we can make context outside-in. We're excited to share our ideas and we'd love your feedback so we can keep iterating.

To solve the problem, we recommend introducing 3 new concepts:

  1. Page groups: re-usable collections of URLs, wildcards, content types, etc.
  2. User groups: reusable collections of roles, user languages, or other user attributes.
  3. Impersonation: the ability to view the page as a user group.
Page groups

Most sites have some concept of a "section" or "type" of page that may or may not equate to a content type. A commerce store, for example, may have a "kids" section with several product types that share navigation or other blocks. Page groups adapt to this by creating reusable "bundles" of content consisting either of a certain type (e.g. all research reports), or of manually curated lists of pages (e.g. a group that includes /home, /contact us, and /about us), or a combination of the two (similar to the Context module, but Context never provided an in-place UI).

User groups

User groups would combine multiple user contexts like role, language, location, etc. Example user groups could be "Authenticated users logged in from the United States", or "Anonymous users that signed up to our newsletter". The goal is to combine the massive number of potential contexts into understandable "bundles" that can be used for context and impersonation.

Impersonation

As mentioned earlier, a challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator. Impersonation allows site builders to switch between different user groups, previewing the page as that type of user.

Using page groups, user groups and impersonation

Let's take a look at how we use these 3 ingredients in an example. For the purpose of this blog post, we want to focus on two use cases:

  1. I'm a site builder working on a life sciences journal with a paywall and I want to place a block called "Download report" next to all entities of type "Research summary" (content type), but only to users with the role "Subscriber" (user role).
  2. I want to place a block called "Access reports" on the main page, the "About us" page, and the "Contact us" page (URL based), and all research summary pages, but only to anonymous users.

Things can get more complex but these two use cases are a good starting point and realistic examples of what people do with Drupal.

Step #1: place a block for anonymous users

Let's assume the user is a content editor, and the user groups "Anonymous" and "Subscriber" as well as the page groups "Subscriber pages" and "Public pages" have already been created for her by a site builder. Her first task is to place the "Access reports" block and make it visible only for anonymous users.


First the editor changes the impersonation to "Anonymous", then she places the block. She is informed about the impact of the change.

Step #2: place a block for subscribers

Our editor's next task is to place the "Download reports" block and make it visible only for subscribers. To do that she is going to want to view the page as a subscriber. Here it's important that this interaction happens smoothly, and with animation, so that changes that occur on the page are not missed.


The editor changes the impersonation to "Subscribers". When she does, the "Access reports" block is hidden, as it is not visible for subscribers. When she places the "Download report" block and chooses the "Subscriber pages" page group, she is notified about the impact of the change.

Step #3: see if you did it right

Once our editor has finished steps one and two, she will want to go back and make sure that step two did not undo or complicate what was done in step one, for example by making the "Download report" block visible to anonymous users or vice versa. This is where impersonation comes in.


Anonymous users need to see the "Access reports" block and subscribers need to see the "Download report" block. Impersonation lets you see what that looks like for each user group.

Summary

The idea of combining a number of contexts into a single object is not new; both Context and Panels do this. What is new here is that when you bring this to the front end with impersonation, you can make a change that has broad impact while seeing it exactly as your user will.

Janez Urevc: Media entity reaches 8.x-1.0!

Planet Drupal -


More than two years ago I gave a session about the future of media at DrupalCon Prague. The outcome of that session was a planning sprint that happened two days after it. One of the ideas born at that sprint was Media entity, a storage layer for media-related information built with simplicity and support for remotely hosted media in mind. Its development started shortly after that and accelerated significantly in the spring of the next year, when the core of the media initiative met at NYC Camp and agreed on a common battle plan for Drupal 8.

Media entity and its plugins have been pretty stable for the last few months. It seemed to be almost ready for its first release, but there were a few tickets in the issue queue which I wanted to resolve first. In the last few days I found some time to look at those. Together with Tadej Baša (@paranojik) we managed to finish all of the most important patches, which allowed me to tag 8.x-1.0 yesterday. I am thrilled and extremely proud. A lot of individuals and organizations invested many hours to make this possible and I would like to thank every single one of them. Special thanks go to the NYC Camp organizers, who organized two sprints and have been supporting us from the beginning; Examiner.com, my ex-employer, who allowed me to spend a significant amount of my time working on many media-related modules; and MD Systems, who organized two media sprints and let part of their team work on Drupal 8 media for 3 months.

Along with the main module I released some of its plugins too: Image, Slideshow, Twitter and Instagram. There are also plugins that handle Video, Audio and Documents, which are also quite ready to be used.

Media entity and its plugins offer many interesting features:

  • simple and lean storage for local and remote media,
  • out-of-the-box integration with standard Drupal tools,
  • pluggable architecture that allows easy implementation of additional plugins,
  • 100% automatic delivery of thumbnails,
  • delivery of remote metadata and
  • mapping of remote metadata with entity fields.

I encourage you to try it and let us know what you think. We are looking for co-maintainers too. If you'd like to spend some time in contrib and have ideas for new features, let me know.

In the next few weeks we're planning releases of the other media modules. Stay tuned!

Amazee Labs: Amazee Labs launches Drupal Hoster amazee.io

Planet Drupal -


Today’s the day to reconsider your hosting. We are launching amazee.io, a state-of-the-art hosting service with an integrated development and hosting environment. Think of a battle-proven system, automated deployments, full congruence between your development and production environments, and very competitive pricing.

“Why another Drupal hosting provider?” you might ask. Read why: stories.amazee.io


And if you have not yet seen our website or factsheet, let me introduce the team behind the system: Michael Schmid (Schnitzel), CTO; Tyler Ward and Bastian Widmer for DevOps; and myself, who after three great years at the Drupal Association accepted the opportunity to lead the new venture as CEO. We are excited!

Hope to see you at the upcoming DrupalCon in New Orleans.

Bangkok, Thailand: CloudFlare’s 79th Data Center

Cloudflare Blog -

CloudFlare just turned up our newest data center in Bangkok, the capital of Thailand and a very popular destination with travelers in Southeast Asia. This expands our network to span 32 cities across Asia, and 79 cities globally.

The floating market at Damnoen Saduak, just outside Bangkok (Photo source: CloudFlare's very own Martin Levy)

Thailand, with a population of 65 million, is the fourth largest country in Southeast Asia. As the central interconnection point for all Internet communications within the country, Bangkok was the natural choice for our newest deployment.

Southeast Asia expansion

Southeast Asia commonly includes the countries of Brunei, Cambodia, East Timor, Indonesia, Laos, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam.

Following Singapore and then Kuala Lumpur, Malaysia, Bangkok is the third location for CloudFlare in the region. We have more deployments in the works; however, our next data center beginning with the letter 'B' is roughly 6,000 miles away.

Online, in a massively mobile way

While only 40% of the population is online, Thailand has become a majority-mobile country very quickly, with 70% of its users accessing the Internet predominantly via smartphones. Through CloudFlare’s implementation of encryption using the ChaCha20-Poly1305 cipher suites, mobile users see a better experience with less battery usage for the same content. Add to that our recent release of HTTP/2 Server Push, and we expect that Thailand will feel the difference in performance instantly.

JasTel

We're proud to announce the first of a series of agreements with carriers in Thailand. JasTel Network Company is the partner for our first Bangkok data center. By moving JasTel customers’ access to the CloudFlare network from Singapore and Hong Kong into a local datacenter in Bangkok, we’ve helped build a better Internet experience for their users across the country.

Thailand Internet Networks and their Interconnection



While these diagrams from the National Electronics and Computer Technology Center (NECTEC) may look complex, their progression over the years provides a window into the evolution of Thailand’s domestic and international interconnection. Each Internet network operates its own method of interconnecting with other networks in the country. This isn’t efficient; however, as the video shows, it has been how Thailand has operated.

There is now a new Internet peering point - the Bangkok Neutral Internet Exchange (BKINX). CloudFlare is optimistic that it will improve the country’s interconnection.

The CloudFlare network today (soon to be updated with Bangkok):

- The CloudFlare Team

Red Route: Including SVG icons in a Drupal 8 theme

Planet Drupal -

I got started with task runners a while ago using Grunt, thanks to an excellent 24 ways article by Chris Coyier. Lately I've been using Gulp more, and all the cool kids seem to be going that way, so for the Drupal 8 version of The Gallery Guide, I've decided to use Gulp.

Since hearing Chris Coyier talk about SVG icon systems, I've been meaning to implement them in my own workflow. I've written about a similar setup for Jekyll on the Capgemini engineering blog, and wanted to apply something similar to this site.

The Gulp setup for this project is almost identical to the one described in that post, so I won't go into too much detail here, but in the spirit of openness that's guiding this project, the gulpfile is on Github.

In short, there's a directory of SVG icons within the theme, and there's a Gulp task to combine them into a single file at images/icons.svg. Then the contents of that file are injected into the page using a preprocess function. There's a slight gotcha here - if the value is added directly, without going through the t() function, then it automatically gets escaped to block cross-site scripting. It doesn't seem to make sense, according to the documentation, but I needed to pass the value in without any prefix character:

function gall_preprocess_page(&$variables) {
  // Read in the combined SVG sprite generated by the Gulp task.
  $svg = file_get_contents(drupal_get_path('theme', 'gall') . '/images/icons.svg');
  // Passing the markup through t() with an unprefixed placeholder stops it
  // being escaped on output.
  $variables['svg_icons'] = t('svg', array('svg' => $svg));
}

If we were working with user-supplied content, this would be opening up a dangerous attack vector, but given that it's content that I've created myself in the theme, it's safe to trust it.

Having done that, in the page.html.twig template, the variable is available for output:

{{ svg_icons }}

Then these files can be referenced - here's a snippet from region--header.html.twig:

<a href="https://www.twitter.com/thegalleryguide" title="Follow us on Twitter">
  <svg class="icon">
    <use xlink:href="#twitter"></use>
  </svg>
</a>

Part of me feels like there should be a more Drupal-ish way of doing this, so that the links are part of a menu. But given that this is my own project, and changing the icons would require a code change, it doesn't feel so bad to have them hard-coded in the template.

Tags: Drupal, Drupal 8, The Gallery Guide

DrupalOnWindows: Cheap Pipe (sort of BigPipe) in Drupal 7

Planet Drupal -

hussainweb.me: Drupal Meetup Bangalore – March and April 2016

Planet Drupal -

Things have gotten busy after DrupalCon Asia, which meant that the Drupal meetup we hold in Bangalore every month was a little difficult to organize. Srijan Technologies stepped up and offered their office space in Whitefield, Bangalore. They also took care of snacks and even lunch for all the attendees. Kudos to Srijan for organizing the meetup. Thank you!

DrupalEasy: Drupal 6 to Drupal 8(.1.x) Custom Content Migration

Planet Drupal -

Note: This blog post is based on Drupal 8.1.x. It is an updated version of a previous tutorial based on Drupal 8.0.x. While the concepts are largely the same as in 8.0.x, a refactoring of the core migrate modules took place in Drupal 8.1.x (migrations are now plugins). This tutorial updates the previous example to work with Drupal 8.1.x, and also demonstrates how to specify a migration group and run the migration with Drush. If you're familiar with the previous tutorial, you may want to skip to the "Rolling up our sleeves" section below.

Even if you're only casually acquainted with Drupal 8, you probably know that the core upgrade path to Drupal 8 has been completely rewritten from the ground up, using many of the concepts of the Migrate and Drupal-to-Drupal migration modules. Using the Migrate upgrade module, it is possible to migrate much of a Drupal 6 (or Drupal 7) site to Drupal 8 with a minimum of fuss (DrupalEasy.com is a prime example of this). "Migrate upgrade" is similar to previous Drupal core upgrade paths - there are no options to pick and choose what is to be migrated - it's all or nothing. This blog post provides an example of how to migrate content from only a single, simple content type in a Drupal 6 site to a Drupal 8.1.x site, without writing any PHP code at all.

Setting the table

First, some background information on how the Drupal 8 Migrate module is architected. The Migrate module revolves around three main concepts:

  • Source plugins - these are plugins that know how to get the particular data to be migrated. Drupal's core "Migrate" module only contains base-level source plugins, often extended by other modules. Most Drupal core modules provide their own source plugins that know how to query Drupal 6 and Drupal 7 databases for data they're responsible for. For example, the Drupal 8 core "Node" module contains source plugins for Drupal 6 and Drupal 7 nodes, node revisions, node types, etc… Additionally, contributed and custom modules can provide additional source plugins for other CMSes (WordPress, Joomla, etc…), database types (Oracle, MSSQL, etc…), and data formats (CSV, XML, JSON, etc.)
  • Process plugins - these are plugins designed to receive data from source plugins, then massage it into the proper form for the destination on a per-field basis. Multiple process plugins can be applied to a single piece of data. Drupal core provides various useful process plugins, but custom and contributed modules can easily implement their own.
  • Destination plugins - these are plugins that know how to receive data from the process plugins and create the appropriate Drupal 8 "thing". The Drupal 8 core "Migrate" module contains general-purpose destination plugins for configuration and content entities, while individual modules can extend that support where their data requires specialized processing.

Together, the Source -> Process -> Destination structure is often called the "pipeline".

It is important to understand that for basic Drupal 6 to Drupal 8 migrations (like this example), all of the code is already present - all the developer needs to do is configure the migration. It is much like preparing a meal where you already have a kitchen full of tools and food - the chef only needs to assemble what is already there.

The configuration of the migration for this example will take place completely in two custom .yml files that will live inside of a custom module. In the end, the custom module will be quite simple - just a .info.yml file for the module itself, and two .yml files for configuring the migration.

Reviewing the recipe

For this example, the source Drupal 6 site is a large site, with more than 10 different content types, thousands of nodes, and many associated vocabularies, users, profiles, views, and everything else that goes along with an average Drupal site that has been around for 5+ years. The client has decided to rewrite the entire site in Drupal 8, rebuilding virtually everything from the ground up - but they wanted to migrate a few thousand nodes from two particular content types. This example will demonstrate how to write a custom migration for the simpler of the two content types.

The "external article" content type to be migrated contains several fields, but only a few of consequence:

  • Title - the node's title
  • Publication source - a single line, unformatted text field
  • Location - a single line, unformatted text field
  • External link - a field of type "link"

Some additional notes:

  • The "Body" field is unused, and does not need to be migrated.
  • The existing data in the "Author" field is unimportant, and can be populated with UID=1 on the Drupal 8 site.
  • The nodes will be migrated from type "ext_article" to "article".

Several factors make this a particularly straightforward migration:

  • There are no reference fields at all (not even the author!)
  • All of the field types to be migrated are included with Drupal 8 core.
  • The Drupal 6 source plugin for nodes allows a "type" parameter, which is super-handy for migrating only nodes of a certain type from the source site.
Rolling up our sleeves

With all of this knowledge, it's time to write our custom migration. First, create a custom module with only an .info.yml file (Drupal Console's generate:module command can do this in a flash.) List the Migrate Drupal (migrate_drupal) and Migrate Plus modules as dependencies. The Migrate Drupal module dependency is necessary for some of its classes that contain functionality to query Drupal 6 databases, while the Migrate Plus module dependency is required because custom migrations are now plugins that utilize the MigrationConfigEntityPluginManager provided by Migrate Plus (full details in a blog post by Mike Ryan).
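A minimal .info.yml for that custom module might look like this (a sketch; the module name and description are placeholders, not from the original post):

# migrate_external_articles.info.yml (sketch; names are placeholders)
name: Migrate External Articles
type: module
description: 'Migrates external article nodes from the Drupal 6 site.'
core: 8.x
dependencies:
  - migrate_drupal
  - migrate_plus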

Next, create a "migration group" by creating a migrate_plus.migration_group.mygroup.yml file. The purpose of a migration group is to be able to group related migrations together, for the benefit of running them all at once as well as providing information common to all the group migrations (like the source database credentials) in one place.
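Based on the Migrate Plus module's group configuration, migrate_plus.migration_group.mygroup.yml could look roughly like this (the label and description are illustrative assumptions):

# migrate_plus.migration_group.mygroup.yml (sketch)
id: mygroup
label: My migration group
description: Migrations of content from the legacy Drupal 6 site.
source_type: Drupal 6
shared_configuration:
  source:
    key: legacy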

The "shared_configuration -> source -> key" value of "legacy" corresponds to a database specified in the Drupal 8 site's settings.php file. For example:
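A sketch of such a connection (the database name and credentials are placeholders):

$databases['legacy']['default'] = [
  // Connection details for the Drupal 6 source database; all values here
  // are placeholders.
  'database' => 'drupal6_legacy',
  'username' => 'dbuser',
  'password' => 'dbpass',
  'host' => 'localhost',
  'port' => '3306',
  'driver' => 'mysql',
  'prefix' => '',
];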

Next, create a new "migrate_plus.migration.external_articles.yml" file in /config/install/. Copy/paste the contents of Drupal core's /core/modules/node/migration_templates/d6_node.yml file into it. This "migration template" is what all node migrations are based on when running the Drupal core upgrade path. So, it's a great place to start for our custom migration. Note that the file name begins with "migrate_plus.migration" - this is what allows our custom migration to utilize the Migrate Plus module's MigrationConfigEntityPluginManager.

There are a few customizations that need to be made in order to meet our requirements:

  • Change the "id" and "label" of the migration to something unique for the project.
  • Add the "migration_group: mygroup" to add this migration to the group we created above. This allows this migration access to the Drupal 6 source database credentials.
  • For the "source" plugin, the "d6_node" migration is fine - this source knows how to query a Drupal 6 database for nodes. But, by itself, it will query the database for nodes, regardless of their type. Luckily, the "d6_node" plugin takes an (optional) "node_type" parameter. So, we add "ext_article" as the "node_type".
  • We can remove the "nid" and "vid" field mappings in the "process" section. The Drupal core upgrade path preserves source entity ids, but as long as we're careful with reference fields (in our case, we have none), we can remove the field mappings and let Drupal 8 assign new node and version ids for incoming nodes. Note that we're not migrating previous node revisions, only the current revision.
  • Change the "type" field mapping from a straight mapping to a static value using the "default_value" process plugin. This is what allows us to change the type of the incoming nodes from "ext_article" to just "article".
  • Similarly, change the "uid" field mapping from a straight mapping to a static value of "1" (again using the "default_value" process plugin), which assigns the author of all incoming nodes to the UID=1 user on the Drupal 8 site.
  • Since we don't have any "body" field data to migrate, we can remove all the "body" field mappings.
  • Add a mapping for the "Publication source". On the Drupal 6 site, this field's machine name is "field_source"; on the Drupal 8 site, the field's machine name is "field_publication_source". Since it is a simple text field, we can use a direct mapping.
  • Add a direct mapping for "field_location". This one is even easier than the previous because the field name is the same on both the source and destination site.
  • Add a mapping for the source "External link" field. On the Drupal 6 site, the machine name is "field_externallinktarget", while on the Drupal 8 site, it has been changed to "field_external_link". Because this is a field of type "link", we must use the "d6_cck_link" process plugin (provided by the Drupal 8 core "Link" module). This process plugin knows how to take Drupal 6 link field data and massage it into the proper form for Drupal 8 link field data.
  • Finally, we can remove all the migration dependencies, as none of them are necessary for this simple migration.

The resulting file is:
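A reconstruction from the customizations described above (a sketch rather than the post's verbatim file; straight mappings carried over from the d6_node.yml template, such as status and created, are abbreviated):

# migrate_plus.migration.external_articles.yml (sketch)
id: external_articles
label: Drupal 6 external article nodes
migration_group: mygroup
source:
  plugin: d6_node
  node_type: ext_article
process:
  title: title
  type:
    plugin: default_value
    default_value: article
  uid:
    plugin: default_value
    default_value: 1
  status: status
  created: created
  changed: changed
  field_publication_source: field_source
  field_location: field_location
  field_external_link:
    plugin: d6_cck_link
    source: field_externallinktarget
destination:
  plugin: entity:node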

Note that .yml files are super-sensitive to indentation. Each indentation must be two spaces (no tab characters).

Serving the meal

To run the migration, first enable the custom module. When the module is enabled, Drupal core reads in the migration configuration; if the configuration isn't formatted properly, this can trigger an error. For example, if you misspelled the "d6_node" source plugin as "db_node", you'll see the following error:

[Error] The "db_node" plugin does not exist.

If the module installs properly, the Drush commands provided by the Migrate Tools (8.x-2.x-dev - 2016-Apr-12 or later) module can be used to manage the migration. First, the Drush "migrate-status" command (alias: ms) can be run to confirm that the migration configuration exists. This is similar to functionality in Drupal 7's Migrate module.

~/Sites/drupal8migrate $ drush ms
 Group: mygroup      Status  Total  Imported  Unprocessed  Last imported
 external_articles   Idle    602    602       0            2016-04-29 16:35:53

Finally, using Drush, the migration can be run using the "migrate-import" (alias: mi) command:

~/Sites/drupal8migrate $ drush mi external_articles
Processed 602 items (602 created, 0 updated, 0 failed, 0 ignored) - done with 'external_articles'

Similarly, the migration can be rolled back using the Drush "migrate-rollback" (alias: mr) command:

~/Sites/drupal8migrate $ drush migrate-rollback external_articles
Rolled back 602 items - done with 'external_articles'

Once the migration is complete, navigate over to your Drupal 8 site, confirm that all the content has been migrated properly, then uninstall the custom module as well as the other migrate-related modules.

Note that the Migrate module doesn't properly dispose of its tables (yet) when it is uninstalled, so you may have to manually remove the "migrate_map" and "migrate_message" tables from your destination database.

Odds and ends
  • One of the trickier aspects of writing custom migrations is updating the migration configuration on an installed module. There are several options:
    • The Configuration development module provides a config-devel-import-one (cdi1) Drush command that will read a configuration file directly into the active store. For example: drush cdi1 modules/custom/mymodule/config/install/migrate_plus.migration.external_articles.yml
    • Drush core provides a config-edit command that allows a developer to directly edit an active configuration.
    • Finally, if you're a bit old-school, you can uninstall the module, then use the "drush php" command to run \Drupal::configFactory()->getEditable('migrate_plus.migration.external_articles')->delete();, then reinstall the module.
  • Sometimes, while developing a custom migration, if things on the destination get really "dirty", I've found that starting with a fresh DB helps immensely (be sure to remove those "migrate_" tables as well!)
Additional resources

Thanks to Mike Ryan and Jeffrey Phillips for reviewing this post prior to publication.

Lizard Squad Ransom Threats: New Name, Same Faux Armada Collective M.O.

Cloudflare Blog -

CloudFlare recently wrote about the group of cyber criminals claiming to be the "Armada Collective." In that article, we stressed that this group had not followed through on any of the ransom threats they had made. Quite simply, this copycat group of cyber criminals had not actually carried out a single DDoS attack—they were only trying to make easy money through fear by using the name of the original “Armada Collective” group from late 2015.

Since we published that article earlier this week, this copycat group claiming to be the "Armada Collective" has stopped sending ransom threats to website owners. Extorting companies proves to be challenging when the group’s email actively encourages target companies to search for the phrase “Armada Collective” on Google. The first search result for this phrase now returns CloudFlare’s article outing this group as a fraud.

Beginning late Thursday evening (Pacific Standard Time), several CloudFlare customers began to receive threatening emails from a "new" group calling itself the “Lizard Squad”. These emails have a similar modus operandi to the previous ransom emails. This group threatened DDoS attacks unless a ransom amount was paid to a Bitcoin address before a deadline. Based on discussions with other security vendors, we can confirm that at least 500 of these emails have been sent out by this group claiming to be the “Lizard Squad.”

Each of these emails is identical, including a Bitcoin address that has been re-used. As we discussed in our previous article, re-using the Bitcoin address means the group of cyber criminals has no way of identifying which company has paid the ransom. If this group were legitimate, you’d expect to see a unique Bitcoin address for each individual target company.

Included below is an example email from the "Lizard Squad" compared to the Armada Collective:

While the emails have some differences, they are ultimately identical in their goal and how they go about attempting to extort money from the target companies. Similar to the group claiming to be the "Armada Collective", there is a general consensus within the security community that this group claiming to be the "Lizard Squad" is not in fact actually the group they claim to be. This is another copycat.

Unsurprisingly, we haven’t seen any example of the "Lizard Squad" actually following through on their threats. CloudFlare will continue to monitor the situation, and we’ll provide an update if any further changes develop.

CloudFlare would like to continue to stress the importance of not paying ransom if you receive a threat. Paying the ransom only emboldens these cyber criminals and provides them with funding to attack other companies. If you receive a threat please reach out to CloudFlare, and our team would be happy to discuss whether an attacker is known to carry through on their threats. While the threats made by these imposter groups are unlikely to result in an actual attack, we do encourage companies to use a service like CloudFlare to proactively protect their infrastructure against these types of attacks when there is a legitimate threat.

Only one week left to register for DrupalCon New Orleans

Drupal News -

Get a DrupalCon Ticket

DrupalCon New Orleans is May 9-13th. The schedule is now available on the website, so you can start planning out your week.

See the Schedule

With 130 sessions in 13 tracks across 3 days, 88 Birds of a Feather (BOF) roundtable talks, plus keynotes, sprinting opportunities, an expo hall full of exhibitors, and Monday training sessions and summits galore, there’s no shortage of great things to do at DrupalCon New Orleans.

To view your schedule, just click the “My Schedule” button on the main schedule page. You’ll be able to sort by day, making it easy to see what your plans are every step of the way through the convention.

To add an item to your schedule, open the session information and click the “Add to schedule” button visible under the time and location information. You can also remove items from your schedule either under the specific session page or on your schedule.

To claim a BOF, select the day and time you’d like to hold your BOF, click the ‘Create a BOF’ button in an available room, and add in information about what you’ll be discussing.

We have dozens of time slots to choose from, so the sooner you claim a BOF, the more likely it is that you'll reserve the time and space that's right for you.

See the BOFs

And remember, if you’re attending a training or summit on Monday, it won’t automatically be added to your schedule, so make sure you add it from its unique page on the website.

We hope you enjoy using the scheduler, and look forward to hearing your feedback when we see you at DrupalCon New Orleans. Just as a reminder, if you haven't yet purchased your DrupalCon ticket, do so soon -- regular ticket pricing ends in a little over a week!

Get a DrupalCon Ticket


Aten Design Group: Introducing Entity Query API for Drupal 8

Planet Drupal -

Drupal 8 lays the foundation for building robust and flexible RESTful APIs quickly and efficiently. Combine this with Drupal’s best-in-class fieldable entity model and it becomes incredibly easy to construct systems that solve many different problems well.

Out of the box, Drupal 8 comes with core modules for all of the standard RESTful HTTP methods: GET, POST, PATCH, and DELETE. These endpoints are entity specific. Collection endpoints - endpoints that deal with entities in aggregate - are another story. The solution offered is the Views module.

In a headless or API-intensive site, however, core Drupal 8 and Views are limited by a major shortcoming. Support for querying your entity content over an API is limited to only the custom views that you create. This means that you must first create a view for any content that you want to expose. Filtering is limited to only the filters you explicitly enable, and there’s no clear solution for fine-grained control over sorting and paging your results via query strings - the common practice for RESTful APIs. This creates a lot of development overhead for headless and API-intensive sites, which will inevitably end up with innumerable views.

Creating publicly available APIs would be worse yet. Typically, you would like a public API to allow your consumers to discover and access your data as they see fit. Managing a view for each of your entity types becomes increasingly difficult with every new field or entity type added. This issue makes sense: the Views module’s original intent was to provide prescribed aggregations of your content, possibly modified by a few contextual items like the current path or the current user. Views was never intended to be an all-purpose query tool for the end user.

Enter Entity Query API. Entity Query API allows API consumers to make queries against any entity in Drupal. From users, to nodes, to configuration entities, this is an incredibly powerful tool. By creating a standardized set of parameters for crafting these queries, Entity Query API allows developers to create generalized tooling not tied to particular views or entities. Moreover, they need not worry about creating matching views for every collection of content. This ability to let API consumers craft their own result-set further reinforces the separation of concerns between the client and the server.

Entity Query API does all this by taking advantage of the excellent QueryInterface provided by Drupal Core. The module simply translates your request URI and associated query string into an entity query on the backend, executes it, and returns the results as JSON. By using this, we also get the built in access control that Drupal entity queries provide.
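For reference, the kind of backend entity query the module builds and executes looks like this (a generic core QueryInterface example, not code taken from the module itself; the condition values are illustrative):

// Find the ten most recent published articles, respecting entity access.
$ids = \Drupal::entityQuery('node')
  ->condition('type', 'article')   // conditions
  ->condition('status', 1)
  ->sort('created', 'DESC')        // sorting
  ->range(0, 10)                   // ranges (paging)
  ->execute();
$nodes = \Drupal\node\Entity\Node::loadMultiple($ids);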

Entity Query API is still in alpha (as of April 2016), but it fully supports everything that you can do with an entity query in code, i.e., conditions, condition groups, sorting, ranges, etc. Like the REST UI module, we have a similar configuration page for enabling queryable entities. We support all core authentication methods as well as JSON Web Token Authentication (another module we’ve built). In the future, we’d like to dynamically collect any authentication providers available, just like the REST UI module.

I’m going to be sprinting on Entity Query API at DrupalCon New Orleans on Monday, May 9th 2016 and during the after-DrupalCon sprints on Friday, May 13th 2016. We’d like to add support for other encodings like XML and HAL+JSON (currently the module just supports JSON). Finally, we’d like to add the option to retrieve just entity IDs instead of fully loaded entities.

As always, there’s plenty of work to be done in open source. If you’re interested in Entity Query API, come find me during the sprints or send me a tweet anytime during DrupalCon, my handle is @gabesullice. Of course, the easiest way to help is just to download the module and report any bugs you find. Finally, if you're going to be at DrupalCon New Orleans, stop by the Aten booth, I'd love to hear your ideas and feedback!

DrupalCon News: PM FTW: Need-to-See Sessions & BoFs to blow your minds

Planet Drupal -

I’ll never forget the day that I talked to Angie at DrupalCon London and asked her who the community Project Managers were. There were none. I was floored. How could something so essential, useful & critical to success be overlooked? That was the state of project management in our community as I saw it then: nonexistent. Who were the Project Managers in the Drupal Community? I didn’t know a single one in 2011.

Acquia Developer Center Blog: BigPipe in Drupal: Bigger, Better Performance for Free

Planet Drupal -

Wim Leers, Senior Software Engineer in the Acquia Office of the CTO (aka “OCTO”), has been busy in the last few years making Drupal 8 amazing! His contributions include working with Fabian Franz on aspects of Drupal’s new caching and rendering systems to make Drupal 8 performant. Today’s podcast is a conversation he and I had about who he is and what he’s been up to, following our collaboration on my own post about BigPipe.

Below is a transcript of parts of the conversation you can hear in full in the audio and video versions of this podcast. In the audio and video versions, we also touch on:

  • aspects of contribution and the professionalization of contribution in open source, especially in the light of Wim being paid by Acquia to be a full-time contributor to Drupal.
  • how even small contributions, like a well-written bug report, add up to making a big difference ... and my daughter’s commit credit in Drupal 8 :-)
  • Hierarchical Select module
  • Many hands making light work in open source
  • Plus everything below about caching, BigPipe, performance, and more in the transcript!
Learn more about BigPipe in Drupal 8
Interview video - 41 minutes

BigPipe in a nutshell: “What matters in the end is not the number of requests, but how fast it actually feels for the end-user because that's what you care about and that's where BigPipe makes a huge difference." - Wim Leers

Guest dossier
  • Name: Wim Leers
  • Work affiliation: Senior Software Engineer, Acquia Office of the CTO
  • Drupal.org: wim-leers
  • Twitter: @wimleers
  • LinkedIn: Wim Leers
  • GitHub: wimleers
  • Blog/Website: http://wimleers.com/ - "Hello! My name is Wim and I’m interested in WPO, Drupal and data mining. I’ve worked on Facebook’s Site Speed team. And I love llamas."
  • Drupal/FOSS role: Drupal core contributor
  • 1st version of Drupal: Drupal 5 beta

Partial Transcript
How did you discover Drupal?

Wim: I was going to build this website – or I needed to build a website but I was looking for a way that will allow me to set up a website that was maintainable, that didn’t require too much digging around in code, and that looks like it would be a good choice for the long run. I looked at WordPress, at Joomla, at Drupal, and I think a few others maybe, but Drupal stood above the rest like it was the obvious better choice back then, I believe. It was the time of Drupal 5.0 being in active beta. 4.7 was I think the active version. I never used that. I jumped straight to the beta because it looked much better.

jam: I had the joy of installing 4.6 and 4.7. The good old days. Wow. Drupal 5.0 was such a massive leap at that time.

Why did you stick with it now for nine years?

Wim: Yes. I got kind of rolled deeper into the community as I think is the story for many of us. That was 2006 - the end of 2006. It was the Christmas break of my first year of University. I was trying to actually do less work on this Open-source project that I was working on before by building a website so that others could maintain it. So it’s kind of funny that I used Drupal 4 and other Open-source projects. In doing so, I needed a few things to be built myself in order for this website to really function well. So I started working on that and then I noticed - back then it was very easy to get an account that allows you to create a project, a module on Drupal.org. Right now, we have to go through a quite tough review process, but back then at least, there was no review process. It was just if they saw you on IRC quite a few times, “Yes, sure, you can get an account.”

jam: The pendulum swings back and forth on that one.

Wim: I know.

jam: We’ve been in a fairly Draconian period recently.

The path to becoming a performance expert

... from doing something Open-sourcey to choosing Drupal because it seemed the best, to then getting annoyed by sites being slow and then looking at how Drupal could be faster for a Bachelor thesis, to then better understanding it through my Master thesis and at Facebook and then eventually, I ended up at Acquia. It’s a path that has definitely been mutually influenced by Drupal.

Wim: Yes. I don't know the details. In any case, back then it was very easy. That’s all that matters. That’s why I managed to publish a module very early on, and it started to get quite a few users, and more users, and I found it interesting that my module was growing in feature set and getting more and more users, with hundreds of websites using it. That was so fascinating to me that I kept working on it and improving it more. Then, I got freelance work doing that in the summer, so instead of having a crappy student job for the summer, I managed to do freelance work while further developing this module. So that led to more freelance development, and that led to my bachelor thesis being about Drupal and CDNs - so performance in general - but then a few years later led to my master thesis being about Drupal, not very strictly Drupal, but again performance. Performance plus data mining to better understand why a site is slow in certain scenarios. That basically led to Drupal being a significant part or significant presence during my entire period of studying computer science.

jam: What was that first module?

Wim: Hierarchical select.

jam: Picking up at your Master's thesis, talk about your work in performance and where that’s led you.

Wim: Yes. It’s quite interesting to see how Drupal allowed me to do a bachelor’s thesis on Drupal plus CDN to make Drupal faster. Then I wanted to better understand in which scenarios a site could still be slow - for example, when accessed from a certain region even while using a CDN, or when using a particular browser, or maybe a particular piece of JavaScript was slow in a particular browser. Those kinds of things - figuring that out is quite difficult. People complain it’s slow, but they don’t really explain why, or whether it’s normal, and users, if you will - non-technical people - will just complain and say it’s broken or slow but will not be able to pinpoint the exact reason.

There can be so many reasons and it can be very difficult to simulate that, to actually see it happen in front of you as a developer. So for my Master’s thesis, I worked on data mining and collecting performance information, performance data. Applying data mining to the performance data allowed me to automatically figure out which situations, which combinations, were slow. It allows you to see which exact scenarios are most commonly slow and therefore most worth the attention of a person looking into those problems. From that point of view ... And I published that Master thesis and a series of blog posts about that ... Somehow, a person at Facebook discovered that or stumbled upon that and he reached out to me. He was from what was called back then the Site Speed Team. At first, I literally didn't believe that there was a person from Facebook contacting me. I was looking at the email headers to figure out if it was spoofed or something.

jam: Somebody’s pranking you.

Wim: Yes, yes. Exactly. That's what I thought. So it looked like it checked out. I was saying, “Okay. Let’s send a reply, I guess.” Then, two days later, I think I had an informal Skype call with that person, and it looked like he was at a Facebook office or something, Silicon-Valley-like in his background.

jam: Because you could tell.

Wim: Well, it looked like he at least wasn't in a cellar somewhere pranking me. It was somewhat legit looking. Yes, then I had phone interviews, and I think I actually was asked in the beginning even to do a full-time position, but I was still studying, so I asked if it was possible to do an internship instead, and so that’s how I ended up doing an internship there while continuing to work on that same data mining and performance data project, the piece of software that I started for my Master thesis. That led me to an interesting place: from doing something Open-sourcey to choosing Drupal because it seemed the best, to then getting annoyed by sites being slow and then looking at how Drupal could be faster for a Bachelor thesis, to then better understanding it through my Master thesis and at Facebook and then eventually, I ended up at Acquia. It’s a path that has definitely been mutually influenced by Drupal.

On to making Drupal faster

jam: Your biggest contribution to Drupal 8 has also been in the performance area. Would you like to talk about caching and cache tags and BigPipe?

Wim: Sure. I've now been working on Drupal and working at Acquia full-time for about three and a half years, close to four. The first part of that was Spark, so authoring experience. That's WYSIWYG editing, CKEditor, the toolbar - to some extent, those kinds of things.

jam: You came in during the Spark initiative?

Cache Tags, Render Caching, Cache Invalidation

Wim: Yes. So that was 2012. Then, for the last one and a half to two years probably, I’ve been working pretty much entirely on performance, so making Drupal faster. A big portion of what was looking to be a good candidate for making Drupal 8 significantly faster was cache tags. That was a concept that was added a long time ago, I think even in 2011 or so. But it wasn’t really being used widely; it was only being used in a handful of places across Drupal Core. For example, entities - nodes, terms, users - did not use them at all even though they seemed like a prime candidate. At the same time we had the concept of render caching: when we’re rendering something, render caching allows you to cache the fragment of the page that is being rendered so that you don’t have to do all of the work of getting the data and then turning it into HTML in the theme system, which can take some time. The point was to use render caching in more places, the most expensive places, for example rendering entities such as nodes and users. That actually made for an interesting overlap between render caching and cache tags, because when you have rendered something, you want that data to be updated as soon as the data it depends upon is changed. For example, if you change a node title, you want the render cached nodes to be updated. Otherwise, we’re looking at the old thing.

jam: Right. Now, I think that just about everyone who is listening to this will know this already. But what you’re actually describing is one of the harder problems in computer science. How do you cache something, how do you find out in a cheap way when that cache should be cleared and you have new data, and how do you avoid having stale stuff showing up as much as possible?

Wim: Yes. So the saying in computer science goes, “There are two hard problems in computer science: naming things and caching,” or “cache invalidation” I should say. To be clear, I did not invent cache tags. That was something that very smart people came up with a long time before me. I had the ability, also because I’m working on Drupal core full time, to bring cache tags to many places in Drupal core, so that it’s an inherent part of many parts of Drupal core. I made sure that, for example, every single entity type - whether it’s config entities or nodes or terms - has proper test coverage, and whenever those things change, the corresponding cache tag is invalidated, which then allows these render caches and any other caches to be updated automatically, to be invalidated automatically. Indeed, it’s just a small bit of metadata that is associated with whatever is cached, whether it’s rendered or computed or whatever it is. That allows us to very efficiently update those things. Cache tags everywhere make sure that we can reliably invalidate things and reliably have things update when they should, which was an impossible problem to solve in Drupal 7 and before because we didn’t have such a concept.
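For readers following along, a minimal sketch of what this looks like against Drupal 8's core cache API (the cache ID, markup and tag values here are hypothetical):

use Drupal\Core\Cache\Cache;

// Store a rendered fragment, tagged with the data it depends on.
\Drupal::cache('render')->set(
  'my_fragment',                      // cache ID
  '<div>Rendered node teaser</div>',  // cached data
  Cache::PERMANENT,                   // no time-based expiry
  ['node:42', 'user:3']               // cache tags
);

// Saving node 42 invalidates the 'node:42' tag automatically; custom code
// can also invalidate tags explicitly:
Cache::invalidateTags(['node:42']);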

jam: And “performantly” as well, if that’s a word.

Wim: Yes, that’s a word. Yes.

jam: Without huge cost on my server.

Wim: Yes. There is always some cost because there is something additional that needs to happen. You’d have to retrieve something from the cache then check if the cache tags that are associated with it have been invalidated in the meantime. That’s the additional cost. But it’s a pretty small cost.

jam: That’s a much smaller cost than re-rendering the entire page.

Wim: Yes. Every single time, which is what happened in Drupal 7 and before. Actually, to be honest, in most other systems. In most other CMSs and frameworks, what you quite often see as a solution - which is not really a solution - is to just assume it’s okay to cache something for five or ten minutes. But that means that if you're a blogger, for example, and you fix a typo, your wrong title with that typo still in it is going to be there for another 5-10 minutes. So your changes are not showing up right away, which is a very annoying, disconcerting experience.

jam: Sure. It’s a kludge. It’s a hack. I mean Cron versus Poor Man’s Cron comes to mind ...

Wim: Yes.

BigPipe, Cache Context, Max Age and the "Dynamic-ness" of things

jam: Yes. So I’m going to embed two recent webinar videos that you’ve done on this podcast page [See links above!]. For anyone who’s listening to the podcast - in real time we’re speaking in early 2016 - we recently did two webinars at Acquia about a thing called BigPipe, and BigPipe is essentially the next step in this conversation. I’m going to embed those videos, the slides are also going to be available, and I’m going to link to all of the stuff that we’re talking about. We’ve got this fantastic caching architecture in place and working in Drupal 8. What is BigPipe, and tell us about the magic that it does with all of this stuff?

Wim: Yes. First off, actually, the two webinars you were mentioning - the first one is actually a subset of the second one, so I would recommend only linking to the second one, which includes everything. Then people have one coherent story. That’s probably going to be useful to them. [Check out all the links above in this post!]

jam: All right. Okay. Now, BigPipe.

Wim: Yes. So far we talked about cache tags and render caching. But cache tags are not the only bit of cacheability metadata that we have in Drupal 8. We have two more. Those three things together actually allow us to know, comprehensively and with complete certainty, what things something depends on and what it varies by. So cache tags are for declaring dependencies on data, for example on entities, so that we know when something has changed. But we also have cache contexts, which allow us to define which context something depends on, what it varies by. For example, if jam has user role A and I have user role B and we have access to different things, then the outputs, the rendered HTML, should also be different. Or maybe it says, “Hi jam” or “Hi Wim,” then the output needs to vary by user. So those kinds of variations are what cache contexts are about. So cache tags and contexts, and then there is a third called “max-age” to describe something that expires after a certain period of time. That’s less commonly necessary. Max-age zero means that something is absolutely not cacheable, so it needs to be requested or updated every single time. But it’s useful for things like temperature data that is okay to cache for, say, one minute or two minutes or 10 minutes.
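In code, all three kinds of cacheability metadata hang off the #cache key of a render array; a minimal sketch (the tag, context and markup values are illustrative):

use Drupal\Core\Cache\Cache;

$build = [
  '#markup' => 'Hi, jam',
  '#cache' => [
    'tags' => ['user:3'],           // invalidated when user 3 is updated
    'contexts' => ['user'],         // varies per user ("Hi, jam" vs. "Hi, Wim")
    'max-age' => Cache::PERMANENT,  // or a number of seconds; 0 = uncacheable
  ],
];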

So those three things together allow us to know the "dynamicness" of every part of the page. In Drupal usually we have blocks. Most people build Drupal sites using the block system. So when blocks appear in different parts of the page, very often some blocks are personalized. For example, the menu block below will only show menu links that are accessible to the current user. Maybe there is a shopping cart, maybe there is a “Hi, jam. Your friends have just sent you so many messages.” Something like that. So those kinds of things are dynamic. Then, usually there are also parts of the page - and it’s not limited to blocks, by the way, but that’s just an easy way to think about it - that are the same across users and usually even across everything. So for example, a menu in the footer or a search form like a search block.

jam: Or the main content.

Wim: The main content, yes. So all of those are actually cacheable across users - if it’s rendered once, we can reuse it for jam, for me, for anybody else. Thanks to that cacheability metadata - tags, contexts, and max-age - we know how much a given block is going to vary, and at what point it’s going to be stale, when certain entities invalidate it. For example, when jam changes his user name into something like “Llama”.

jam: Just to pick a random word.

BigPipe and Perceived Performance

Wim: Yes. So the fact that we know for any given block what things it depends on makes us able to know when something is very dynamic and when it’s not. That allows us to pull out that part of the page and delay rendering it, so that we can send the entire page minus the personalized parts first, and then send the personalized parts - like the “Hi, jam” block, the shopping cart, those kinds of things - later. The difference this makes is in perceived performance: how fast a site feels, how fast a site looks, and actually just how fast a site shows up. It makes it so that the site shows up instantaneously regardless of user, regardless of the complexity of those dynamic parts of the page, because the parts that are the same - which is usually a significant portion of a page - show up immediately. They can be sent right away, extremely fast, which means ...

jam: Usually, when I'm browsing, that's the stuff that I actually care about. That's the article I want to read, that's the photo I want to see, because that's the point of the page. And that's what everybody's getting already, and it's usually pre-cached, ready to go.

Wim: Yes, exactly.

jam: Barely or not at all dynamic.

Wim: Yes, exactly. Basically, the crucial parts of the page are usually not personalized, and in that case we can make them available so, so much faster. Drupal, and just about every other system out there, currently renders everything, and only once every single detail is rendered does it send the page to the end user. That means you have to wait even for the small things that are maybe not that important to you. Then you have BigPipe, which is just a module you can install; you don't have to configure anything. Thanks to that metadata, it can figure out which parts are very dynamic or personalized, delay the rendering of those, send the majority of the page first, and then send the dynamic parts later. That makes for a much, much faster experience. We're trying to get that into Drupal 8.1, and it looks like many people are happy with that [BigPipe is an experimental core module in Drupal 8.1!]. It will not be enabled by default, and it will even be marked as an experimental module at first, because we want to make sure that it works in even the most extreme cases. It's better to have it experimental first, so sites can opt into it and we can gain more experience, and then hopefully in 8.2 we can make it a non-experimental module. That will be a great performance boost with basically no effort for every site.
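As an illustration of what a "too dynamic" part of the page looks like to the Render API, here is a sketch of explicit placeholdering. The 'mymodule.cart_builder' service is hypothetical; '#lazy_builder' and '#create_placeholder' are the real render array keys that BigPipe builds on:

<?php
// Sketch: a personalized shopping cart rendered via a lazy builder.
// 'mymodule.cart_builder' is a hypothetical service whose build() method
// returns a render array. BigPipe streams such placeholders after the
// rest of the page has already been sent.
$build['cart'] = [
  '#lazy_builder' => ['mymodule.cart_builder:build', []],
  '#create_placeholder' => TRUE,
];

In practice even this is rarely needed: Drupal placeholders render arrays automatically based on their cacheability metadata (for example, a max-age of zero), which is exactly why BigPipe works without any configuration.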

jam: So cache tags, render caching, cache contexts, all of that is in Drupal 8 and always on by default, and I don't have to think about it. I'm just benefitting from your work.

Wim: Yes. Yes.

jam: As of the beginning of February 2016, if I want to take advantage of this delivery mechanism, which builds on those techniques, that's the BigPipe module?

Wim: Yes.

jam: And as of Drupal 8.1 or 8.2, you're probably moving that into Core as well? [In core as of Drupal 8.1!]

Wim: Yes.

jam: Wow. Exciting.

Wim: Yes. It is very exciting. Actually, this is a technique that was not invented by us. And I should say that it was not just me who worked on this: Fabian Franz, also from Germany, did too.

Thank you, Fabian! Thank you, Facebook!

jam: Thank you, Fabian.

Wim: Yes, thank you, Fabian. He did a huge amount of work. He did the initial pioneering, the initial proof of concept. I worked a lot with him to actually make it happen and get it to a more finished state, but he did a lot of the work. Even Fabian didn't invent this, though. It's a technique pioneered by Facebook; they published about it some years ago. I forget the exact ...

jam: No, no, no. When you were an intern there…

Wim: No, no, no.

jam: You took all the documentation, snuck it onto a photocopier, and smuggled it out.

Wim: Then, I probably would be in trouble. No.

jam: Wim Leers, master software spy!

Wim: That’s actually a pretty cool title. I should try to make that happen. Yes, they pioneered it. The whole point is that currently, in the classical way of delivering webpages, what happens is: first you do a request, then the server does its work while you wait and wait and wait, looking at a blank screen. Then the server sends a bunch of things, and the client, the browser, has to fetch all the CSS, the JavaScript, and the images, and only then can it start rendering. So it's a sequential process. BigPipe allows us to make it a more parallel process, where the browser immediately gets a response, not with everything, but with probably the majority if not all of the CSS, JavaScript, and images, so it can start downloading and rendering already. Then the dynamic parts show up. That's the reason it's called BigPipe: it effectively becomes a bigger pipe along which to send things, because things happen in parallel instead of in sequence.
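To illustrate that sequential-versus-parallel point, here is a deliberately simplified sketch of the streaming idea in plain PHP. This is not Drupal's or Facebook's actual implementation, and build_personalized_cart() is a made-up stand-in for the slow, per-user work:

<?php
// Send the static, cacheable shell first and flush it, so the browser can
// start fetching assets and rendering; then compute the slow, personalized
// part and stream it afterwards.
header('Content-Type: text/html; charset=utf-8');
echo '<html><head><link rel="stylesheet" href="/site.css"></head><body>';
echo '<h1>Article</h1><p>The static content everyone sees…</p>';
echo '<div id="cart">Loading…</div>';
flush(); // The browser renders the above while the server keeps working.

$cart = build_personalized_cart(); // Hypothetical slow, per-user work.
echo '<script>document.getElementById("cart").innerHTML = '
  . json_encode($cart) . ';</script>';
echo '</body></html>';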

jam: All right. Wim, this is so interesting. I wrote a small post about this and did some research into it, and every new thing I find out about it is just so exciting. It's such a great bit of technology. Thank you, Fabian Franz from Tag1 Consulting, for all of your work. Thank you, Wim, for your amazing work on this. I am going to link to everything that we've been talking about, and I'm going to embed the webinar videos where people can learn a lot more about the technical nitty-gritty of all this [see links above!]. Wim, I guess you're working on getting this into Core now, right? That's pretty much your job right now?

Wim: No, I’m working on other things as well but that is one of the things that I’m going to focus on in the next few weeks. Yes.

jam: Fantastic.

Wim: I’m very happy. I wanted to get this into Drupal ever since I read about it on Facebook’s engineering blog, and it’s finally at the point where it already works. You can download it for 8.0 if you’re running Drupal 8 already, and it will hopefully be in 8.1. It’s great that open source is able to get this awesome technique, which doesn’t require any extra infrastructure. Usually, making things faster requires a lot of infrastructure and money and servers; this is just a more efficient way of delivering HTML and getting it to the browser. I'm very excited that it’s going to be available in an open-source project like Drupal. As far as I know, nothing else has something like this, so that's pretty cool.

jam: I'm working through the title for this podcast in my mind. It's got to be something like “Bigger, Better Performance for Free,” right? Actually, the point that you only just touched on now, which I hadn’t thought of this morning, was that you don’t need massive parallel server infrastructure and all this stuff to get things really, really cracking. In this case, you get a ton more bang for your buck out of Drupal, just with all of this default stuff that’s…

Wim: Yes. People usually measure things in terms of requests per second, and that is actually going to be identical with BigPipe; the entire duration of a request is going to be the same. It's just that we send useful information much earlier and then continue sending the dynamic parts afterwards. Those traditional metrics are easy to measure, but they don't actually give you a good idea of how fast a site is. What matters in the end is not the number of requests per second but how fast the site actually feels to the end user, because that's what you care about, and that's where BigPipe makes a huge difference.

jam: Wim, thank you for taking the time to talk with me this morning. It's been so great and so interesting, and thanks for everything that you've been doing. Keep up the good work.

Wim: Thank you. Yes. Thanks for having me, and maybe see you next time and have a great day.

Podcast series: Drupal 8 | Skill Level: Beginner, Intermediate, Advanced

qed42.com: Pune Drupal Group Meetup, April 2016

Planet Drupal -


The monthly Pune Drupal Group Meetup for April was hosted by QED42. It was the second PDG meetup to take place in the month of April; you would assume meeting this often would get tiring for other people, but not us! We Drupalers love a good catch-up session.

The first session was kicked off by Prashant and Rahul, interns at QED42, who spoke on "Our Experience with Drupal." They talked about their journey as newcomers to Drupal, through the lenses of both the CMS and the community: their confusion at the beginning, the new technologies and software they have learned, their experience at DrupalCon Asia, and their love for the community. It was a really enjoyable session, peppered with earnest observations and cute cat pictures, and a brilliant first-time attempt. Bravo, boys!

The second session was presented by Arjun Kumar of QED42 on "Introduction to CMI." After a brief overview of CMI (the Configuration Management Initiative) and how it differs from the Features-based workflow, he concluded with a demo.

After a short discussion on the probable date and location for Pune Drupal Camp, we broke off for BoF sessions, with Navneet leading a discussion on Acquia certifications and further discussion of CMI.

With 20 people in attendance, we concluded the PDG April meetup with delicious Pahadi sandwiches in our tummies. Have a great weekend and see you soon!


InternetDevels: The Best Drupal eCommerce Websites

Planet Drupal -

Nowadays, lots of companies can benefit from having their own ecommerce sites, which allow brands to sell anything from physical products to consultations and appointments. In one of our previous blogs, we outlined the main reasons why Drupal is the one stupendous solution for your ecommerce website. Today, we’ll take you on one of the most enthralling journeys and show you a variety of outstanding examples of ecommerce websites built with Drupal. So come aboard!

Read more
