Feed aggregator

KnackForge: How to update Drupal 8 core?

Planet Drupal -

How to update Drupal 8 core?

Let's see how to update your Drupal site between 8.x.x minor and patch versions. For example, from 8.1.2 to 8.1.3, or from 8.3.5 to 8.4.0. I hope this will help you.

  • If you are upgrading to Drupal version x.y.z

           x -> is known as the major version number

           y -> is known as the minor version number

           z -> is known as the patch version number.
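Assuming a Composer-managed Drupal 8 site with the Drush command-line tool available (a common but not universal setup; adjust for your own workflow), a patch or minor update typically boils down to a few commands:

```shell
# Always back up the database (and code) before updating.
drush sql-dump > backup-$(date +%F).sql

# Pull in the new core release, e.g. 8.1.2 -> 8.1.3.
composer update drupal/core --with-dependencies

# Apply any pending database updates and rebuild caches.
drush updatedb
drush cache-rebuild
```

Sites not managed with Composer instead replace the core/ and vendor/ directories from the release tarball and then run update.php.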

Sat, 03/24/2018 - 10:31

Greg Boggs: Drupal 8 Theming Best Practices

Planet Drupal -

The theming guide for Drupal 8 will get you started with the basics of theming a Drupal site. But once you’ve learned the basics, what best practices should you apply to Drupal 8 themes? There are lots of popular methods for writing and organizing CSS. The basics of CSS, of course, apply to Drupal.

  • Don’t get too specific
  • Place your CSS in the header and JavaScript in the footer
  • Organize your CSS well
  • Theme by patterns, don’t go top down
  • Preprocess your styles

Use configuration first

When it comes to Drupal, there are some common mistakes that happen when a front end developer doesn’t know Drupal. In general, apply your classes in configuration. Do not fill your Drupal theme with custom templates like you would for WordPress. Template files, especially with Twig, have their place, but configuration should be your primary tool. My favorite tools for applying classes are Display Suite and Block Class. Panels is also good. And Fences isn’t terrible.

Applying your classes in configuration allows you to easily edit, reuse, and apply classes across every type of thing in Drupal. If you apply your classes with templates, it’s difficult to apply them across different types of content without cutting and pasting your code. However, don’t be afraid to use some presentational logic in your Twig templates.

Be careful what you target

In Drupal, there are many times when the only selector that will work is an ID. If you have this problem, don’t use it! Never, ever apply your CSS with IDs in Drupal. This is true for every framework, but in Drupal it’s a bigger problem because IDs are not reusable, they are hard to set, and they often change. A backend developer will not be able to apply your styles to other items without fixing your CSS for you. Don’t write CSS that you know a PHP programmer will have to rewrite, because you don’t want your PHP programmer writing CSS.

Use view modes

Make sure to target your styles to reusable elements. So, avoid node types because those aren’t reusable on other nodes. Instead, use view modes and configuration to apply selectors as you need them.

Avoid machine names

This relates back to avoiding IDs. Machine names in Drupal sometimes need to change, and they are not reusable. So, don’t target them. Instead, use Views CSS classes, Block Class, and configuration to apply classes to your content.

Don’t get too specific

Drupal 8 markup is better, but Drupal is still very verbose. Don’t get sucked in by Drupal’s divs. Only get as specific as you need to be. Don’t replicate Drupal’s HTML in your CSS.

Apply a grid to Drupal

Choose a grid system that allows you to apply the grid to any markup, like Singularity or Neat. While you certainly can use Bootstrap with Drupal, doing so requires a heavy rewrite of the HTML, which will break contributed modules like Drupal Commerce in obscure, hard-to-fix ways.

Use a breadcrumb module

Do not write advanced business logic into your theme templates. If your theme is programming complex features, move that code to a module, or install modules like Current Page Crumb or Advanced Agg to handle more complex functions. This isn’t WordPress; your theme shouldn’t ship with a custom version of Panels.

Don’t hard code layouts

Use {{ content }} in your templates and let the View Modes, Display Suite, or Panels handle your layouts.

Don’t use template.php

If you need some ‘glue’, write or find a module meant for it. Drupal is built by many, many tiny modules. Learn to love it, because well-written modules are reusable, maintained for free, and always improving. Custom code dropped into your theme makes for hard-to-maintain code.

Drupal Commerce: Enabling Fancy Attributes in Commerce 2.x

Planet Drupal -

In Drupal Commerce 1.x, we used the Commerce Fancy Attributes and Field Extractor modules to render attributes more dynamically than just using simple select lists. This let you do things like show a color swatch instead of just a color name for a customer to select.

Fancy Attributes on a product display in Commerce Kickstart 2.x.

In Commerce 2.0-alpha4, we introduced specific product attribute related entity types. Building on top of them and other contributed modules, we can now provide fancy attributes out of the box! When presenting the attribute dropdown, we show the labels of attribute values. But since attribute values are fieldable, we can just as easily use a different field to represent it, such as an image or a color field. To accomplish this, we provide a new element type that renders the attribute value entities as a radio button option.

Read more to see an example configuration.

Acquia Developer Center Blog: Drupal 8 Module of the Week: Display Suite

Planet Drupal -

Each day, more Drupal 7 modules are being migrated over to Drupal 8 and new ones are being created for the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules, projects, and tools available for Drupal 8. This week: Display Suite.

Tags: acquia drupal planet, ds, display suite, drag and drop, ui, UX

Nacho Digital: Moving an existing site into Platform.sh

Planet Drupal -

A walk through my first Platform experience with steps on how to avoid problems if you need to upload your local version into it. Lots of hints to get it right the first time!

If you are in a hurry and only need a recipe, please head to the technical part of the article, but I would like to start by sharing a bit of my experience, because you might still be deciding if Platform is for you.

I decided to try Platform because a friend of mine needed a site. Due to several reasons, I didn't want to host it on my personal server. But I didn't want to run a server for him either. I wanted to forget about maintaining the server, keeping it secure, or doing upgrades to it.

So I started thinking about options for small sites:

Bluespark Labs: The Art of Sketching before Wireframing. OK, it’s not really Art!

Planet Drupal -

So you’ve got a great idea. Spent months thinking about it. Sold the idea internally to key stakeholders. Grabbed the attention of the right people, organized a team, and managed to get funding. You’ve selected your agency, gone through a discovery process, and are ready to design out the idea. Now what?

Well, you start by sketching of course. Yup, I said it: we start by drawing pretty pictures (well, not so pretty really).

The power of sketching. There’s no need for commitment here.

You may think you don’t need to sketch because you already know how you want the interface to look. But oftentimes, when you actually start sketching, you’ll realize that the path you were so set on might not work best.

Sketching sets the tone for the rest of the design process. It ensures you’re creating a user experience that meets both user and stakeholder goals and objectives. Removing this step from the process puts you at a disadvantage as you’re more likely to get locked into a design because it’s more difficult to make quick iterations using software built for wireframing and design comps. Sketching allows you to visualize what an interface could become without committing to anything.

Sketching clutter is a means to an end

Initial sketches will likely uncover that you're trying to cram too much onto the user’s screen, but that’s OK. We’re trying to uncover all possibilities so we can iterate quickly.

Having a UX team take an outside-in approach can really help define what you’re trying to achieve without overwhelming the user.

We’ve found that sketching the pages/concepts can be beneficial in a number of ways:

  1. Speeds up the discovery phase by allowing all members of the team to get their thoughts on paper and get buy-in from key stakeholders

  2. Allows the team to iterate quickly on the structure of the site/application without focusing on design elements such as colors, fonts, imagery, etc.

  3. Offers a quick frame of reference to have early implementation discussions with developers on the project

  4. Offers the ability to highlight key areas for measurement to ensure we’re meeting business and project objectives

  5. Offers the ability to test real users with paper prototypes without writing a single line of code

Sketching enables you to work faster & iterate quickly

Start with drawing the high-level elements on the page, such as the main navigation, secondary navigation, footer elements, and high-level links. But also try to think about the positioning of elements on the page. Most users read left to right and top to bottom. Keeping that in mind, we can guide the user’s eyes to elements on the page by highlighting them with design characteristics such as color, graphics, etc.

Moving some navigational aids into the secondary nav or the footer doesn’t mean they’re less important, but it does allow us to simplify the interface and add clarity for users to achieve their online goal.

Sketching helps you brainstorm ideas

One of the biggest advantages of sketching is that everyone can do it. From designers to the director of human resources at your company (you don’t have to be an artist). So don’t be afraid to sketch out your ideas.

Sketching is an efficient way to get the ideas out of your head and out in the open for discussion. It keeps you from getting caught up in the technology, and instead focuses you on the best possible solution, freeing you to take risks that you might not otherwise take.

Getting everyone involved in this stage can be incredibly valuable for a couple of reasons. You can quickly get a good grasp of what you’re envisioning while gaining an understanding of the development process and interaction requirements, as you’re guided through the process.

What gets designed on the front end has a back end component that most clients don’t understand. Working with a UX team gives you the opportunity to gain that understanding while contributing with feedback that moves the project forward.

Sketching a UI develops multi-dimensional thinking

Designing a user interface is a process. Translating an idea to meet user requirements requires multi-dimensional thinking. Sketching a user interface is primarily a two-dimensional process, but as UX professionals we need to consider a number of factors:

  1. What is the user trying to accomplish?

  2. How is the user interacting with the site/application (desktop, mobile, kiosk, device specific, etc.)?

  3. How does the UI react as the user interacts with it?

  4. What appears on each of the pages as content and navigational aids?

  5. What if a user encounters an error? Are there tools to help them recover?

Sketching allows you to visualize the screen-to-screen interaction so that your idea is visible and clear in user interface form, ultimately helping you move the project to the next level.

Take your sketches up a notch with interactivity

Lastly, using an online prototyping tool offers the ability to upload the sketches and add hotspots over the navigation and linking aids. This allows you to click through rough sketches as if they were a real functioning website (a really ugly website). I can’t tell you the number of times I’ve worked on a series of sketches and didn’t realize I was missing a major element or interaction until I added hotspots and tried to use it.

The design phase, beginning with the initial sketches, is a way to envision an interface that meets measurable goals. The ultimate goal is to align key business objectives with user goals. When those two things align, you’ve got a website or product that’s bound to succeed.

Tags: Drupal Planet, UX, UI, Design, wireframes, rapid iterative

Acquia Developer Center Blog: My Drupal 8 Learning Path: Configuration Management in D8

Planet Drupal -

Recently, fellow Acquian Tanay Sai published a blog with a link to the activity cards he and other members of the Acquia India team have been following to learn Drupal 8.

Each card is a self-contained lesson on a single topic related to Drupal 8, with a set objective, steps to complete the learning exercise, and links to blogs, documentation, and videos to get more information.

Tags: acquia drupal planet

Jeff Geerling's Blog: Set up a faceted Apache Solr search page on Drupal 8 with Search API Solr and Facets

Planet Drupal -

In Drupal 8, Search API Solr is the consolidated successor to both the Apache Solr Search and Search API Solr modules in Drupal 7. I thought I'd document the process of setting up the module on a Drupal 8 site, connecting to an Apache Solr search server, and configuring a search index and search results page with search facets, since the process has changed slightly from Drupal 7.

Install the Drupal modules

In Drupal 8, since Composer is now a de-facto standard for including external PHP libraries, the Search API Solr module doesn't actually include the Solarium code in the module's repository. So you can't just download the module off Drupal.org, drag it into your codebase, and enable it. You have to first ensure all the module's dependencies are installed via Composer. There are two ways that I recommend for doing this (both are documented in the module's issue: Keep Solarium managed via composer and improve documentation):
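As a rough sketch of the Composer route (assuming the Drupal.org Composer package repository is already configured for your project, which is a per-site setup detail), the module and its Solarium dependency come in together:

```shell
# From the site root: fetch the module; Composer resolves and installs
# the Solarium library alongside it.
composer require drupal/search_api_solr

# Then enable it; Search API is pulled in as a module dependency.
drush en search_api_solr -y
```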

Palantir: The Secret Sauce podcast, Ep. 16: Finding Your Purpose as a Drupal Agency

Planet Drupal -

CEO and Founder George DeMet shares a continuation of ideas presented at DrupalCon Barcelona with his new talk on the benefits of running a company according to a set of clearly defined principles, which he's presenting next week at DrupalCon New Orleans. It's called Finding Your Purpose as a Drupal Agency.

iTunes | RSS Feed | Download | Transcript

We'll be back next Tuesday with another episode of the Secret Sauce and a new installment of our long-form interview podcast On the Air With Palantir next month, but for now subscribe to all of our episodes over on iTunes.

Want to learn more? We have built Palantir over the last 20 years with you in mind, and are confident that our approach can enhance everything you have planned for the web.


Allison Manley [AM]: Hi, and welcome to the Secret Sauce by Palantir.net. This is our short podcast that gives quick tips on small things you can do to make your business run better. I’m Allison Manley, an account manager here at Palantir, and today’s advice comes from George DeMet, our Founder and CEO, who as a small business owner knows a thing or two about how to run a company based on clearly defined principles.

George DeMet [GD]: My name is George DeMet, and I’m here today to talk about the benefits of running a company according to a set of clearly defined principles. What follows is taken from a session that I presented last fall at DrupalCon Barcelona on Architecting Companies that are Built to Last.

At the upcoming DrupalCon New Orleans in mid-May, I’ll be continuing this conversation in an all-new session called Finding Your Purpose as a Drupal Agency. If you’re able to attend DrupalCon New Orleans, I hope you’ll check it out.

Some time back I came across an article from the early 1970s about my grandfather, who was also named George DeMet. He was a Greek immigrant who spent more than 60 years running several candy stores, soda fountains, and restaurants in Chicago. While the DeMet’s candy and restaurant businesses were sold decades ago, the brand survives to this day and you can still buy DeMet’s Turtles in many grocery stores.

I never really got to know my grandfather, who died when I was 7 years old, but I have heard many of the stories that were passed down by my grandmother, my father, and other members of the family.

And from those stories, I’ve gotten a glimpse into some of the principles and values that helped make that business so successful for so long. Simple things, like honesty, being open to new ideas, listening to good ideas from other people, and so forth.

And as I was thinking about those things, I started doing some research into the values that so-called family businesses have in general, and that some of the oldest companies in history have in particular.

The longest lasting company in history was Kongo Gumi, a Japanese Buddhist temple builder that was founded in the year 578 and lasted until 2006. At the time that Kongo Gumi was founded, Europe was in the middle of the dark ages following the fall of the Roman Empire, the prophet Muhammed was just a child, the Mayan Empire was at its peak in Central America, and the Chinese had just invented matches.

At some point in the 18th century the company’s leadership documented a series of principles that were used by succeeding generations to help guide the company.

This included advice that’s still relevant to many companies today, like:

  • Always use common sense
  • Concentrate on your core business
  • Ensure long-term stability for employees
  • Maintain balance between work and family
  • Listen to your customers and treat them with respect
  • Submit the cheapest and most honest estimate
  • Drink only in moderation

Even though the Buddhist temple construction and repair business is a pretty stable one, they still had to contend with a lot of changes over their 1,400-year history. Part of what helped was that they had unusually flexible succession planning: even though the company technically stayed in the same family for 40 generations, control of the company didn’t automatically go to the eldest son; it went to the person in the family who was deemed the most competent, and sometimes that person was someone related by marriage.

Kongo Gumi not only built temples that were designed to last centuries, but also built relationships with their customers that lasted for centuries.

In the 20th century, Kongo Gumi branched out into private and commercial construction, which helped compensate for the decline in the temple business. They also didn’t shy away from changes in technology; they were the first in Japan to combine traditional wooden construction with concrete, and the first to use CAD software to design temples.

And while Kongo Gumi’s business had declined as they entered the 21st century, what ultimately did them in were speculative investments that they had made in the 80’s and early 90s in the Japanese real estate bubble.

Even though they were still earning more than $65 million a year in revenue in the mid-2000s, Kongo Gumi was massively over-leveraged and unable to service the more than $343 million in debt they had accumulated since the collapse of the bubble, and they ended up being absorbed by a larger construction firm.

Principles are designed to help answer the question of *how* a company does things, and what criteria they should use to make decisions. In the end, Kongo Gumi was no longer able to survive as an independent entity after 1,400 years in business not because of economic upheaval or changes in technology, but because they strayed from their core principles, stopped taking the long view, and went for the quick cash.

Companies that want to be successful in the long run need to identify their core principles and stick to them, even when doing so means passing up potentially lucrative opportunities in the short term.

Regardless of whether the business involves building Buddhist temples, making chocolate-covered pecans, or building websites, a focus on sustainability over growth encourages companies to put customers and employees first, instead of shareholders and investors. These kinds of companies are uniquely positioned to learn from their failures, build on success, and learn how to thrive in an ever-changing business landscape.

AM: Thank you George! George will be presenting his session, Finding Your Purpose as a Drupal Agency at DrupalCon New Orleans on Wednesday, May 11. You can find out more on our website at palantir.net and in the notes for this particular podcast episode.

If you want to see George’s presentation from DrupalCon Barcelona last year on Architecting Drupal Businesses that are Built to Last, you can also find that link in the notes for this episode as well.

For more great tips, follow us on Twitter at @palantir, or visit our website at palantir.net. Have a great day!

Introducing CloudFlare Origin CA

Cloudflare Blog -

Free and performant encryption to the origin for CloudFlare customers

In the fall of 2014 CloudFlare launched Universal SSL and doubled the number of sites on the Internet accessible via HTTPS. In just a few days we issued certificates protecting millions of our customers’ domains and became the easiest way to secure your website with SSL/TLS.

At the time, we "strongly recommend[ed] that site owners install a certificate on their web servers so we can encrypt traffic to the origin." This recommendation was followed by a blog post describing two readily-available options for doing so—creating a self-signed certificate and purchasing a publicly trusted certificate—and a third, still-in-beta option: using our private CA. Even though the out-of-pocket cost of acquiring public CA certificates has fallen to $0 since that post, we have continued to receive requests from our customers for an even easier (and more performant) option.

Operating a public certificate authority is difficult because you don't directly control either endpoint of the HTTPS connection (browser or web server). As a result, public CAs are limited both in their ability to issue certificates optimized for inter-server communication, as well as in their ability to revoke certificates if they are compromised. Our situation at CloudFlare is markedly different: we affirmatively control the edge of our network so we have the flexibility to build and operate a secure CA that’s capable of issuing highly streamlined certificates and ensuring they are utilized securely.

Less is more: removing the extraneous

With Origin CA, we questioned all aspects of certificate issuance and browser validation, from domain control validation (DCV) to path bundling and revocation checking. We asked ourselves what cruft public CAs would remove from certificates if they only needed to work with one browser whose codebase they maintained. Questions such as "why bloat certificates with intermediate CAs when they only need to speak with our NGINX-based reverse proxy" and "why force customers to reconfigure their web or name server to pass DCV checks when they’ve already demonstrated control during zone onboarding?" helped shape our efforts.

The result of us asking these questions and removing anything not needed to secure the connection between our servers and yours is described below, along with the benefits you may see and the interfaces you may use. We’re excited to introduce this third option for protecting your origin—more secure than self-signed certificates and more convenient, performant, and cost effective than publicly trusted certificates—and look forward to hearing about all the various ways you may use it.

What are the incremental benefits of Origin CA over public certificates?

1. Ease of issuance and renewal

The most difficult and time-consuming part of securing your origin with TLS is obtaining—and renewing—a certificate. Or many certificates if you’re using a provider that doesn’t support wildcards. Often this process requires intimate knowledge of OpenSSL or related command line tools, a reconfiguration of your web or DNS server to accommodate domain control validation, and a regularly scheduled reminder or cron job to perform this process again every year (or even every few months). With Origin CA, we took the opportunity to remove as many of these obstacles as possible.

Customers more comfortable in the GUI can, with just two clicks, securely generate a private key and wildcard certificate that will be trusted by our systems for anywhere from 7 days to 15 years. And those who prefer more control over the process can use our API or CLI to issue certificates of specified validity, key type, and key size. Regardless of the user interface chosen, the potentially complicated validation process has been replaced by a simple API key now available in your account on the CloudFlare dashboard; we’ve already verified you control your zone, there’s no need to prove it again.
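As an illustration, API-driven issuance might look roughly like the following; the endpoint, header, and field names here are based on CloudFlare's v4 API conventions and should be verified against the official API documentation before use (the CSR contents are elided):

```shell
curl -X POST "https://api.cloudflare.com/client/v4/certificates" \
  -H "X-Auth-User-Service-Key: $CA_KEY" \
  -H "Content-Type: application/json" \
  --data '{
    "hostnames": ["example.com", "*.example.com"],
    "requested_validity": 5475,
    "request_type": "origin-rsa",
    "csr": "..."
  }'
```

The response contains the signed certificate in PEM form, ready to install alongside the private key that produced the CSR.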

2. Wildcard certificates reduce complexity

If your origin server handles traffic for more than a few hostnames, it can get unwieldy to place a long list of SANs on each certificate request. The alternative—placing just one SAN on each certificate and using the Server Name Indication (SNI) extension to lazy load the correct certificate—can easily get out of hand. In this deployment scenario the number of certificates required is a non-trivial fraction of the number of hostnames you wish to protect (not to mention you may not even be allowed to do so on shared hosts).

Beyond provisioning efforts, placing too many SANs on a single certificate can significantly increase the size of the certificate. Larger certificates consume more bandwidth at your origin (which, unlike CloudFlare, may bill you for marginal bandwidth consumption). The obvious answer for protecting more than a few hostnames (or even domains) on your origin is to use wildcard certificates. With Origin CA, you can request a single certificate containing wildcards for any and all of the zones registered in your account; you can even add wildcards covering multiple levels of a domain, e.g., *.example.com, *.secure.example.com, and *.another-example.com can all co-exist on the same certificate.
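A quick way to double-check which hostnames and wildcards end up on a certificate is to inspect its SAN list with OpenSSL. The snippet below generates a throwaway self-signed certificate as a stand-in (the same inspection command works on any downloaded certificate):

```shell
# Create a stand-in self-signed cert carrying several wildcard SANs
# (the -addext flag requires OpenSSL 1.1.1 or newer).
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 7 -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:*.example.com,DNS:*.secure.example.com"

# List the hostnames the certificate covers.
openssl x509 -in cert.pem -noout -text | grep DNS:
```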

3. Speed and simplicity of revocation

If you’ve ever tried revoking a publicly trusted certificate—or relying on browsers to distrust a revoked certificate—you know how unreliable the process can be. Take your pick of explanations from Google crypto-wunderkind Adam Langley—Revocation doesn’t work (2011); No, don’t enable revocation checking (2014); or Revocation still doesn’t work (2014)—they all reach the same conclusion: browser-based revocation checking is broken and useless without hard fails. (The advent of the OCSP Must-Staple extension should improve the situation, but if history is any indication it will be quite a while before sufficient browsers, certificate authorities, and issued certificates support it.)

Fortunately with Origin CA, we only need one "browser" to respect revocation: our edge. Failing hard—the only acceptable way to fail from a security perspective—is incredibly simple (and fast) when your user agent has an in-memory list of all valid certificates. (Try doing that with the millions of certificates issued by dozens of CAs over the past few years!) Rather than firing requests across the public Internet and praying they return quickly enough, our NGINX instances can just query a local database for each new HTTPS session and confirm in microseconds whether you’ve revoked the certificate for any reason.

And if/when the time comes to revoke, a single click and confirmation is all that’s required for our edge to distrust the certificate. If you expose or misplace your private key (and can’t find it under the couch or in the pockets of yesterday’s pants), simply navigate to the Crypto tab and click the "x" icon next to the compromised certificate. Within seconds we’ll push this revocation status worldwide and our edge servers will refuse to communicate with origins serving the revoked certificate. Such speed, security, and reliability is impossible without total control over the CA and browser ecosystem.

Even if published to the world, to abuse your CloudFlare Origin CA certificate an attacker would either need to compromise your CloudFlare account or take control of your registrar or DNS provider account. In any case, you'd have a lot more to worry about than just a compromised certificate.

4. Widely supported install base

While most web server operators will elect to download the default PEM format for their certificate (as expected by Apache httpd and NGINX), many others will require a different variation. As illustrated in the screenshots below, certificates can be downloaded in several different formats, including DER, the binary equivalent of PEM’s ASCII, and PKCS#7, the Microsoft IIS Server and Apache Tomcat-compatible choice. Simply pick what works for your server and download it; there’s no need to learn cryptic command-line methods for converting.

Besides additional formats, we also have instructions for a wide variety of operating systems and web servers. After generating your certificate simply select your desired destination from the 80+ different options and instructions specific to your environment will be displayed.

5. Optimized certificates increase performance and reduce origin bandwidth consumption

Before a user agent can securely transmit HTTP actions like GET and POST to a web server, it must first establish the TLS session. To do so, the client kicks off a handshake by sending its own capabilities, and the server responds with the negotiated cipher suite and the certificate that should be used to protect the conversation while the two sides agree upon a symmetric key.

Until the client receives—and validates—the certificate(s) sent across the network by the origin, it is unable to complete the handshake and commence with requesting data. As such, it’s in the best interest of performance-sensitive users to make this certificate as small and unencumbered as possible. (Ideally the packet containing the certificate even fits within a single frame and is not fragmented during transmission.) One brief survey by a member of the CA/B Forum found that while the majority of certificates were between 1-2 kB, several thousand were found in the wild at least 10 kB in size, with one even over 50(!) kB.

With Origin CA certificates, we’ve stripped everything that’s extraneous to communication between our servers and yours to produce the smallest possible certificate and handshake. For example, we have no need to bundle intermediate certificates to assist browsers in building paths to trusted roots; no need to include signed certificate timestamps (SCTs) for purposes of certificate transparency and EV treatment; no need to include links to Certification Practice Statements or other URLs; and no need to listen to Online Certificate Status Protocol (OCSP) responses from third-parties.

Eliminating these unnecessary components typically found in certificates from public CAs has allowed us to reduce the size of the handshake by up to 70%, from over 4500 bytes in some cases to under 1300. You can replicate this test yourself using the following two OpenSSL s_client calls:

$ for host in google.com bing.com cloudflare.com patriots.com; do
>   echo -en "$host\t" && openssl s_client -connect www.$host:443 -servername www.$host 2>/dev/null </dev/null | grep "has read"
> done
google.com      SSL handshake has read 3747 bytes and written 497 bytes
bing.com        SSL handshake has read 4252 bytes and written 583 bytes
cloudflare.com  SSL handshake has read 3845 bytes and written 501 bytes
patriots.com    SSL handshake has read 4507 bytes and written 499 bytes

$ openssl s_client -connect [ORIGIN_IP]:443 -servername [ORIGIN_HOSTNAME] 2>/dev/null </dev/null | grep "has read"
SSL handshake has read 1289 bytes and written 496 bytes

How can Origin CA be used?

Now that you've learned why you may want to use Origin CA-issued certificates on your origin, let’s explore the various interfaces you have for issuing (and revoking) them. Effective immediately, certificates for your origin can be issued in an automated fashion through our API or CLI, or through a guided wizard in the Crypto app on the CloudFlare Dashboard. Each of the three available methods is described below along with examples:

1. GUI: Crypto app in the CloudFlare Dashboard

To get started, log in to the dashboard and click on the Crypto icon.

Next, scroll down to the Origin Certificates card and click the "Create Certificate" button.

At this point, you must decide whether you wish to provide your own certificate signing request (CSR) or let your browser create one using its cryptographic libraries.

If you’ve already generated a private key and CSR, select the radio button labeled "I have my own CSR" and paste the CSR contents into the textbox that appears. Alternatively, leave the default option of “Let CloudFlare generate a key pair using my browser” selected. (If you are accessing the dashboard using an older browser, i.e., one that does not support Web Crypto API, you will only see one option.)

By default, the list of hostnames your origin certificate covers includes the apex of the currently selected zone (example.com) and one level of wildcard (*.example.com). If you’d like to protect additional hostnames outside this list (e.g., *.secure.example.com), simply type them in the input box at the bottom of the modal. You may add up to 100 hostnames (or wildcards) on a single certificate, spanning multiple zones, provided they are for zones active in your account. Please keep in mind that each additional SAN increases the size of your certificate, so for applications requiring frequent connections to the origin (and thus TLS handshakes) you should use as few SANs as possible.

After entering additional hostnames (or confirming the defaults will suffice), click Next and your certificate will be generated. By default, it will be displayed in PEM format, which is what web servers such as Apache httpd and NGINX expect. If you require an alternative format, such as PKCS#7 for Microsoft’s IIS or Apache Tomcat, change the dropdown box appropriately and save the contents to your server. Or if you’d like to use a binary format such as DER, select that and click the download button.
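If you saved the certificate in one format and later need another, OpenSSL can convert it locally. A minimal sketch; the file names origin.pem, origin.der, and origin.p7b are placeholders for whatever you saved from the dashboard:

```shell
# Convert a PEM-encoded certificate to binary DER.
openssl x509 -in origin.pem -outform DER -out origin.der

# Wrap the PEM certificate in a PKCS#7 bundle for Microsoft IIS or Apache Tomcat.
openssl crl2pkcs7 -nocrl -certfile origin.pem -out origin.p7b
```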

Lastly, choose your web server from the select box at the bottom of the window and click "Show instructions". Doing so will open installation instructions in a new browser window that, once followed, will allow you to adjust your SSL setting (at the top of the Crypto app) to "Strict" mode.

2. API: Application Programming Interface

If you would like to automate the process of issuing certificates for your origin, or require more control over the parameters (e.g., shorter-lived certificates, greater key size, etc.), you can make use of our certificates API endpoint.

Before calling the API you will first need to generate a certificate signing request. To do so, you can use cfssl or OpenSSL. Instructions for the former can be found here, while our friends at DigiCert have provided an easy-to-use wizard for the latter. Don't worry about including subject alternative names (SANs) in the CSR — we’ll take those in separately through the JSON payload.

i. Generate a private key and CSR

$ openssl req -new -newkey rsa:2048 -nodes -out www_example_com.csr -keyout www_example_com.key -subj "/C=US/ST=California/L=/O=/CN=www.example.com"
Generating a 2048 bit RSA private key
.............................+++
.....................................................................................................................................+++
writing new private key to 'www_example_com.key'

ii. Obtain your Certificate API token

Log in to the CloudFlare dashboard, and click My Settings under your username in the top right-hand corner. Scroll down to the API Key panel and click "View API Key".

Click within the window that pops up and then copy the contents that are auto-selected to your clipboard. Save the key to an environment variable as it will be used in the subsequent curl command.

$ export CF_API_KEY="v1.0-c9..."

iii. Make the API call

Note that you will need to remove newlines from the CSR, as newlines are not permitted in JSON, and add them back into the certificate that’s returned. When removing them, replace each newline character (hex code 0A) with the two-character string "\n".
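As a sketch, both directions can be handled with standard tools; the file name www_example_com.csr matches the earlier example, and CERT is assumed to hold the certificate value you extracted from the JSON response:

```shell
# Flatten the CSR: replace each literal newline (0x0A) with the
# two characters \n so it can be embedded in a JSON string.
CSR=$(awk 'NR > 1 {printf "\\n"} {printf "%s", $0}' www_example_com.csr)

# Reverse the process on the returned certificate: printf '%b'
# expands each \n escape back into a real newline.
printf '%b' "$CERT" > origin.pem
```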

$ curl -sX POST https://api.cloudflare.com/client/v4/certificates/ \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "X-AUTH-USER-SERVICE-KEY: $CF_API_KEY" \
  --data '{"hostnames":["example.com","*.example.com"],"requested_validity":"365","request_type":"origin-rsa","csr":"-----BEGIN CERTIFICATE REQUEST-----...-----END CERTIFICATE REQUEST-----"}'

iv. Extract certificate from API response

{
  "success": true,
  "errors": [],
  "messages": [],
  "result": {
    "id": "0x47530d8f561faa08",
    "certificate": "-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----",
    "hostnames": [ "example.com", "*.example.com" ],
    "expires_on": "2014-01-01T05:20:00.12345Z",
    "request_type": "origin-rsa",
    "requested_validity": "365",
    "csr": "-----BEGIN CERTIFICATE REQUEST-----...-----END CERTIFICATE REQUEST-----"
  }
}

3. CLI: Command Line Interface (Linux only)

Lastly, if you’re using a Linux platform that supports RPM or DEB packages, the easiest way to request certificates is with the Origin CA CLI. Before proceeding, make sure you follow the instructions above in the API example to save your API key in the CF_API_KEY environment variable.

To get started, browse to https://pkg.cloudflare.com/ and follow the instructions to add the CloudFlare Package Repository to your system. With the new package server in place, use apt or yum to install the "cfca" package (the binary is placed in /usr/local/bin on Debian, for example) and then issue a certificate in a manner similar to the following examples:

i. Default parameters (RSA key, 15 year validity)

$ /usr/local/bin/cfca getcert -hostnames example.com,*.example.com
2016/04/25 19:47:23 [INFO] generate received request
2016/04/25 19:47:23 [INFO] received CSR
2016/04/25 19:47:23 [INFO] generating key: rsa-2048
2016/04/25 19:47:24 [INFO] encoded CSR
2016/04/25 19:47:24 [INFO] Connecting to https://api.cloudflare.com/client/v4/certificates/
2016/04/25 19:47:25 [INFO] Successfully issued certificate for: *.example.com example.com
2016/04/25 19:47:25 [INFO] Saved generated private key to wildcard.example.com.key
2016/04/25 19:47:25 [INFO] Saved issued certificate to wildcard.example.com.pem

ii. Short-lived multi-zone ECC example (ECDSA key, 7 day validity)

$ /usr/local/bin/cfca getcert -days 7 -hostnames *.foo.com,*.bar.com -key-type ecdsa -key-size 256
2016/04/26 12:24:24 [INFO] generate received request
2016/04/26 12:24:24 [INFO] received CSR
2016/04/26 12:24:24 [INFO] generating key: ecdsa-256
2016/04/26 12:24:24 [INFO] encoded CSR
2016/04/26 12:24:24 [INFO] Connecting to https://api.cloudflare.com/client/v4/certificates/
2016/04/26 12:24:28 [INFO] Successfully issued certificate for: *.foo.com *.bar.com
2016/04/26 12:24:28 [INFO] Saved generated private key to wildcard.foo.com.key
2016/04/26 12:24:28 [INFO] Saved issued certificate to wildcard.foo.com.pem

$ cat wildcard.foo.com.pem | openssl x509 -noout -text | grep "Not After\|Public Key Algorithm\|DNS"
Not After : May  3 19:19:00 2016 GMT
Public Key Algorithm: id-ecPublicKey
DNS:*.foo.com, DNS:*.bar.com

Final Recommendations & Feedback

As you use Origin CA to generate certificates and secure your origin servers, keep these recommendations in mind:

  1. Use Origin CA to generate certificates for your origin servers, using wildcards for each zone to keep SANs to a minimum.
  2. Upgrade each zone’s SSL setting from "Flexible" or "Full" to "Strict" mode once you have Origin CA or public CA certificates installed protecting all hostnames in that zone.
  3. If you were previously enrolled in our beta Origin CA program, you should issue new certificates so they include revocation endpoints.
  4. When pausing CloudFlare or gray-clouding individual zones, be aware that your visitors may see errors in their browsers until you orange-cloud (reverse proxy) them again.

If you encounter any difficulty issuing certificates, or have any other concerns or suggestions, please open a support ticket and we will be happy to assist you.

OSTraining: Improve The Drupal 8 Admin Menu for Content Creators

Planet Drupal -

The Drupal admin interface needs to keep a lot of people happy. The admin interface is often used by everyone from very experienced users to complete beginners.

One of our members asked if it was possible to create a custom menu for their content creators. They wanted one single place for Drupal beginners to find all the links they needed.

In this tutorial, we'll show you how to do that and also create a faster, more usable admin menu.

Red Route: How I learned to stop worrying and love custom migration classes

Planet Drupal -

When I got sick of banging my head against the migration-shaped wall the other day, my attempts to migrate content from Drupal 6 hadn't progressed very far.

Migrate Upgrade was working fairly well, up to a point.

Gallery nodes had been migrated, but without their addresses, which is hardly surprising, seeing as the D6 site was using the Location module, and I've decided to go with Geolocation and Address for the new site.

Exhibition nodes had been migrated, but without the node references to galleries. There's an issue with a patch on drupal.org for this, but after applying the patch, files weren't being migrated.

Time to get stuck in and help fix the patch, I thought. But the trouble is that we're dealing with a moving target. With the release of Drupal 8.1.0, the various migrate modules are all changing as well, and core patches shouldn't be against the 8.0.x branch anymore. It's all too easy to imagine that updating to the latest versions of things will solve all your problems. But often you just end up with a whole new set of problems, and it's hard to figure out where the problem is, in among so much change.

Luckily, by the time I'd done a bit of fiddling with the theme, somebody else had made some progress on the entity reference migration patch, so when I revisited the migrations, having applied the new version of the patch, the exhibitions were being connected to the galleries correctly.

One problem I faced was that the migration would often fail with the error MySQL has gone away - with some help from drupal.org I learned that this wouldn't be so bad if the tables use InnoDB. Converting one of the suggestions into a quick script to update all the Drupal 6 tables really helped, although copying the my.cnf settings killed my MySQL completely for some reason. Yet another reminder to keep backups when you're changing things.
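The post doesn't include that quick script, but one way to sketch it (assuming a file listing the Drupal 6 table names, one per line; the file name tables.txt is an example, and you could produce it with something like mysql -N -e "SHOW TABLES" d6db):

```shell
# Emit one ALTER TABLE statement per Drupal 6 table, converting it
# to InnoDB; feed the resulting SQL file to mysql afterwards.
awk '{print "ALTER TABLE `" $1 "` ENGINE=InnoDB;"}' tables.txt > convert.sql
```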

Having read some tutorials, and done some migrations in Drupal 6 and 7, I was trying to tweak the data inside the prepareRow method in my custom migration class. The thing I didn't get for ages was that this method is provided by the Migrate Plus module, so not only did the module have to be enabled, but the migration definition yml file names needed to start with migrate_plus.migration rather than migrate.migration.

Once I'd made that change, the prepareRow method fired as expected, and from there it was relatively straightforward to get the values out of the old database, even in the more complex migrations like getting location data from another table and splitting it into two fields.

As an example, here's the code of the prepareRow method in the GalleryNode migration class:

/**
 * {@inheritdoc}
 */
public function prepareRow(Row $row) {
  if (parent::prepareRow($row) === FALSE) {
    return FALSE;
  }
  // Make sure that URLs have a protocol.
  $website = $row->getSourceProperty('field_website');
  if (!empty($website)) {
    $url = $website[0]['url'];
    $website[0]['url'] = _gallerymigrations_website_protocol($url);
    $row->setSourceProperty('field_website', $website);
  }
  // Get the location data from the D6 database.
  $nid = $row->getSourceProperty('nid');
  $location = $this->getLocation($nid);
  // Set up latitude and longitude for use with geolocation module.
  $geolocation = $this->prepareGeoLocation($location->latitude, $location->longitude);
  $row->setSourceProperty('field_location', $geolocation);
  $address = $this->prepareAddress($location);
  $row->setSourceProperty('field_address', $address);
  return parent::prepareRow($row);
}

The methods called by this are all fairly similar, with a switch to the D6 database followed by a query - here's an example:

/**
 * Get the location for this node from the D6 database.
 *
 * @param int $nid
 *   The node ID of the gallery.
 *
 * @return Object
 *   The database row for the location.
 */
protected function getLocation($nid) {
  // Switch connection to access the D6 database.
  \Drupal\Core\Database\Database::setActiveConnection('d6');
  $db = \Drupal\Core\Database\Database::getConnection();
  $query = $db->select('location_instance', 'li');
  $query->join('location', 'l', 'l.lid = li.lid');
  $query->condition('nid', $nid);
  $query->fields('l', array(
    'name',
    'street',
    'additional',
    'city',
    'province',
    'postal_code',
    'country',
    'latitude',
    'longitude',
    'source',
  ));
  $result = $query->execute();
  // Revert to the default database connection.
  \Drupal\Core\Database\Database::setActiveConnection();
  $data = array();
  foreach ($result as $row) {
    $data[] = $row;
  }
  // There should be only one row, so return that.
  return $data[0];
}

I get the feeling that if I was following the "proper" object-oriented approach, I'd be doing this using a process plugin, as suggested by this tutorial from Advomatic. But this does the job, and the code doesn't feel all that dirty.

Another lesson I learned the hard way is that when you're adding fields from other sources inside the prepareRow method, you also need to remember to add those fields into the .yml file.
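For illustration, a hypothetical fragment of such a definition file; the migration ID, source plugin name, and field names here are all made up, not taken from the actual project:

```yaml
# migrate_plus.migration.gallery_node.yml (hypothetical fragment)
id: gallery_node
source:
  plugin: d6_gallery_node
process:
  # Fields populated inside prepareRow() still need a process mapping here.
  field_location: field_location
  field_address: field_address
```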

Feeling pleased with myself that I'd managed to migrate the location data, I decided to jump down the rabbit hole of working on integration between Geolocation and Address modules, even though I'd already said I didn't need to do it. Why do developers do that? I can see how difficult a project manager's job can be sometimes. Thankfully, the integration (at least for the needs of this site) can be a fairly shallow and simple job with a few lines of JavaScript, so I've put a patch up for review.

In my day job, I'm a great believer in breaking tasks down as far as possible so that code can be reviewed in small branches and small commits. But when you're working on your own project, it's easy to jump around from task to task as the mood takes you. You can't be bothered with creating branches for every ticket - after all, who's going to review your code? Half the time, you can't even be bothered creating tickets - you're the product owner, and the backlog is largely in your head.

That butterfly tendency, plus the number of patches I'm applying to core and contributed modules, means that my local site has far more uncommitted changes than I'd normally be comfortable with. Using git change lists in PhpStorm has really helped me to cope with the chaos.

On the subject of patches, I've finally got round to trying out Dave Reid's patch tool - it's working really well so far.

This process has reinforced in my mind the value of testing things like migrations on a small sample set. Thankfully, the Drupal 6 version of the Admin Views module lets you bulk delete nodes and taxonomy terms - I couldn't face tweaking the migration while running multiple iterations of importing 3828 terms.

Which reminds me, xdebug is great, but remember to disable it after you've finished with it, otherwise using the site in your VM will be slow, and as Joel Spolsky says, when things run slowly, you get distracted and your productivity suffers. Humans are not good at multitasking, especially when those tasks are complex and unfamiliar.

And when we try to multitask, we don't think straight. I've just spent an hour debugging something that should just work, because the logic in my taxonomy term source plugin was based on a piece of confusion that now seems obvious and stupid. For reference, in Drupal 6, the 'term_node' table connects nodes with the taxonomy terms they're tagged with, and vid refers to the node revision ID, whereas the 'taxonomynode' table connects terms with their related taxonomy node, and vid refers to the vocabulary ID.

The bad news is that the mappings from nodes to taxonomy terms aren't being migrated properly - for some strange reason they're being registered correctly, but all the rows are being ignored.

The good news is that the work in progress is now online for the world to see. For one thing, it's easier to do cross-browser testing that way, rather than faffing around with virtual machines and proxy tunnels and all that sort of nonsense.

So please, have a look, and if you spot any bugs, let me know by creating an issue on the project board.

Tags: Drupal, Drupal 8, The Gallery Guide

Drupal Console: Drupal Console by the numbers

Planet Drupal -

In this blog post, we will explore some interesting numbers related to the development of this project, between the first commit on Aug 28, 2013, and the day of writing this blog post, May 3, 2016. Keep reading to find out how much time has been invested in development, how many tagged releases we have made, the number of awesome contributors, and the number of downloads for this project, among other things.

Commerce Guys: Sprint with us on Commerce 2.x at DrupalCon New Orleans

Planet Drupal -

Three months ago Commerce Guys separated from Platform.sh to refocus the business around Drupal Commerce. Even as a three-person team (we're now four - welcome, Doug!), we worked hard to dedicate Bojan completely to Commerce 2.x in anticipation of DrupalCon New Orleans. As I discussed in the most recent DrupalEasy podcast, this resulted in great progress both for Commerce 2.x and for Drupal 8 itself. (It also kept us near the top of the most prolific contributors to Drupal. : )

While we're preparing to present the latest in Drupal Commerce in our session at 10:45 AM on Thursday, we're also getting ready to sprint on Commerce 2.x the full week from our booth. This will be our first opportunity to jam on code as a full team since our spinout, and we'd love to have you join us.

Look for us near the permanent coffee station (intentional) beside Platform.sh and Acro Media, our delivery affiliate in the U.S. whose partnership and vision for enterprise Drupal Commerce have been invaluable as we've rebooted our business.

If you'd like to get up to speed on the current status of development, we recommend the following resources:

Naturally, we're happy to help anyone begin to contribute to Drupal 8 / Commerce 2.x. Bojan has mastered the art of integrating developers from a variety of agencies of all skill levels into the development process so far. For an espresso or a Vieux Carré, he may even train you, too. ; )

Doug Vann: FYI: Not going to DrupalCon New Orleans 2016

Planet Drupal -

As DrupalCon New Orleans gets closer, I'm getting asked more frequently if I'm going. I'm honored to have so many hit me up and ask me directly via Twitter, LinkedIn, Skype, Google Hangout, etc.

The answer is NO. And here's why...
Business is going super-dee-duper well and I can barely breathe. I haven't mastered the art of saying NO to new gigs, so that leaves me stretched thin and working long hours. Disappearing for a week would NOT fit well into that scenario. Going to NOLA would cost me not only the $2.5K to $3K of the event, but almost that much again in lost billables. If I don't do billable things, I can't send invoices. The work I would miss could not be made up later. Those dead hours would never see the light of day again.

Is there an ROI on these Drupal trips?
I have attended 7 North American DrupalCons from 2008 to 2014. It cost a lot of money, but I am absolutely and thoroughly convinced that the ROI is incalculable. Seriously... Many of the relationships I have today within the Drupal community can be traced to a DrupalCon, whether it be in a session, in the hallway, in the exhibit hall, in the hotel lobby, or at any of the numerous parties. There is no doubt in my mind that I wouldn't have the thriving business I have today had I not gone to 7 DrupalCons and many other events. In the Drupal community, personal relationships often lead to business relationships, either directly or via referral!
I missed last year [and blogged about it] because my wife and I were closing on our first house purchase and it was taking longer than anticipated. Not to mention I was also swamped in work at the time. 

See you next year?
That is very possible. I hate the idea of missing a 3rd PreNote! And I definitely miss seeing all my friends and engaging the amazing networking opportunities. We'll see. :-)

Drupal Planet


Hook 42: Lots of Multilingual Drupal at DrupalCon New Orleans!

Planet Drupal -

Monday, May 2, 2016

If you are interested in multilingual Drupal development, DrupalCon New Orleans is the place to be. :)

There are 5 events we've got our eyes on, so you might want to put them on your schedule too. It would be great to see some familiar faces and even better to see some new ones!


Multilingual BoFs

Multilingual Digital Experience Management

Tuesday May 10th from 11am to 12pm  |  tdc-pdm (GlobalLink)  |  Room 291

This Birds of a Feather discussion aims to cover digital experience management for multilingual sites including technologies and processes for making things smoother.

Multilingual Successes and Failures

Wednesday May 11th from 1pm to 2pm  |  smithworx (Lingotek)  |  Room 287

We will laugh and cry together as we share stories and tips on dealing with multilingual configuration in Drupal. Maybe bring some tissues if you have been working a lot with multilingual in Drupal 7!

Multilingual Sessions

Drupal 8's multilingual APIs -- integrate with all the things

Wednesday May 11th from 3:45pm to 4:45pm  |  Gábor Hojtsy (Acquia)  |  Room 260-261

If you will be creating modules, themes, or distributions in Drupal 8, then this is a talk you won't want to miss so you can make sure your code leverages all the core multilingual goodness.

The Multilingual Makeover: A side-by-side comparison of Drupal 7 and Drupal 8

Wednesday May 11th from 5pm to 6pm  |  Aimee & Kristen (Hook 42)  |  Room 260-261

If you create multilingual websites or are interested in what's all the hub-bub on how Drupal 8 is so much better than Drupal 7 for language support and translation, come check out this beginner-friendly session. If you find us after the talk, we'll give you some multilingual Drupal stickers. :)

Multilingual Sprints

Saturday May 7th to Sunday May 15th | Locations Vary Depending on Day

A contribution sprint is when the community comes together to work on core and contrib/community issues in the drupal.org issue queue. They are a lot of fun, and you learn a lot too. There are two multilingual-related sprints in New Orleans: Multilingual Migrations and Multilingual (General).

Even if you have never sprinted before, you are encouraged to attend. There is a place for all types of contribution including coding, theming, documentation, testing, UX, and review.

If you have never sprinted before, there is a First Time Sprinter Workshop to get you started with the right tools and then you can move onto the Mentored Core Sprint once you are ready.

If you have sprinted before, come to the general sprints! And, don't forget to sign up here so we make sure there is enough space:


For those sprinting on multilingual issues, we have a special multilingual t-shirt for you. Tweet at us or contact us if you want to reserve one. And, if you have sprinted on multilingual issues in the past but don't have a shirt, let us know so we can set a t-shirt aside for you too.

Multilingual Swag

Monday May 9th to Friday May 13th | Hook 42 & Lingotek | In Person and Booths 501 and 617

We'll have our coveted multilingual Drupal "hello" stickers as well as their new counterpart: "Bye!" Stop us in the hallway, swing by after one of our sessions, or stop by Booth 501 in the exhibit hall to get your swag.

If you are in need of a cool multilingual t-shirt, sprint with us (see above!). And, there are also the pretty awesome Lingotek "Tron" style glow-in-the-dark shirts at their booth (617).

If you are giving away multilingual swag, let us know and we'll add you to the list.

We hope to see you at one of these BoFs, sessions, or sprints… or maybe all of them!

Know of other fun multilingual happenings? Leave a comment or contact us.

Aimee Degnan, Kristen Pol

Cocomore: DrupalCamp Spain: Granada 2016 – We were there!

Planet Drupal -

Each year, a different location is picked for the DrupalCamp in Spain. The last one took place in the beautiful city of Granada. Like many times in the past, part of Cocomore’s team travelled there to attend this event and to learn from the interesting talks concerning Drupal, PHP development and tech in general.

