You don't need to go fully 'headless' to use React for parts of your Drupal site. In this tutorial, we'll use the Drupal 8 JSON API module to communicate between a React component and the Drupal framework.
As a simple example, we'll build a 'Favorite' feature, which allows users to favorite (or bookmark) nodes. This type of feature is typically handled on Drupal sites with the Flag module, but let's say (hypothetically...) that you have come down with Drupal-module-itis. You're sick of messing around with modules and trying to customize them to do exactly what you want while also dealing with their bugs and updates. You're going custom today.
Follow along with my GitHub repo.

Configure a custom Drupal module
First things first: data storage. How will we store which nodes a user has favorited? A nice and easy method is to add an entity reference field to the user entity. We can simply hide this field on the 'Manage Form Display' and/or 'Manage Display' settings, since we'll be creating a custom user interface for favoriting.
When you install the Favorite module in the repo, it will add the 'field_favorites' field to the user entity for you: see the config/install directory.
Next up: define the part of the page that React will replace. Typically, you will replace an HTML element identified by an id with a React component. If this is your first time hearing this, you should do some basic React tutorials first. I started with one on Codecademy.
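The handoff is simple: Drupal renders an empty placeholder element, and React takes it over. Here is a minimal stand-in for that idea. The id, markup, and stubbed-out document below are all hypothetical so the sketch runs outside a browser; a real app would call ReactDOM.render into the actual DOM.

```javascript
// In the browser, the Drupal template would contain: <div id="favorite-root"></div>
// This stub lets the sketch run anywhere:
const fakeDocument = {
  elements: { 'favorite-root': { innerHTML: '' } },
  getElementById(id) { return this.elements[id]; },
};

// Stand-in "component": a real one would return a React element tree,
// not a string of markup.
function Favorite({ favorited }) {
  return favorited ? '★ Favorited' : '☆ Favorite';
}

// Stand-in for ReactDOM.render(<Favorite />, el):
function render(markup, el) { el.innerHTML = markup; }

render(Favorite({ favorited: false }), fakeDocument.getElementById('favorite-root'));
console.log(fakeDocument.getElementById('favorite-root').innerHTML);
```

The point is the contract: Drupal owns the page, React owns everything inside the placeholder element.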
We saw how general authentication works in Drupal 8 in the previous post. Now we'll see how the actual authentication happens when a user logs in. It all begins with a humble login route in user.services.yml of the user module.
There is quite a diversity of abilities. Sight and hearing are the ones people tend to give the most thought to, but there are cognitive, learning, neurological, physical, and speech abilities that can all change how a user engages with the web. As able-bodied people age, they may experience some of the same challenges as those born with disabilities: failing eyesight, diminished hearing, carpal tunnel, and loss of mobility or dexterity.

It's the law.
Under Section 508, government-funded agencies must give disabled employees and members of the public access to information that is comparable to the access available to others. The Americans with Disabilities Act (ADA) “prohibits discrimination and ensures equal opportunity for persons with disabilities in employment, State and local government services, public accommodations, commercial facilities, and transportation.”
"Currently, web developers are less expensive than lawyers." ~ Mike Gifford
But, more than anything, it's the right thing to do. Accessibility promotes inclusivity and diversity on the web.
We'll walk through setting up your local LAMP development environment, installing Drupal 8, configuring Composer, and building your first module: Simple FB Connect.
Join us for a day of action urging the FCC and Congress to preserve Title II net neutrality protections.
Most people who use the internet take for granted the ability to easily access content and services from a variety of different sources. We watch movies and TV shows on Netflix, Hulu, Amazon, and countless other streaming video services. We listen to music on Spotify, Apple Music, Pandora, and Tidal. We communicate with others using social media and messaging apps like Facebook, Twitter, Instagram, WhatsApp, and Snapchat. We get our news and analysis from hundreds of sites representing the full spectrum of political thought in the United States and beyond.
Being able to quickly and easily publish or consume content from nearly anywhere in the world is not just one of the key reasons for the rapid growth of the internet, but also one of its founding principles.
Today, however, that principle is under threat. The United States Federal Communications Commission (FCC) is considering rolling back net neutrality protections that ensure equal access to online content for broadband internet users. Without net neutrality, internet service providers would be able to offer preferred access to some content providers, and withhold it from others.
Some of these providers also own their media companies and have a business interest in making it easier for people to view and purchase their content than someone else’s. Because most people in the United States don’t have many options when it comes to broadband internet access, losing net neutrality protections means that many consumers may no longer be able to freely choose the online services that best meet their needs.
Think about it this way: imagine a world in which Cuisinart not only made great kitchen appliances, but also owned your local electric company. And imagine that electric company decided to charge you more for the electricity used to power any appliances you had that were made by KitchenAid or any other non-Cuisinart brand. You would probably argue that that’s absurd, because there’s nothing inherently different about the electricity that’s used to power a KitchenAid toaster versus one made by Cuisinart.
But that’s precisely what the broadband internet service providers opposed to net neutrality want to be able to do: prioritize traffic from their preferred content providers, making it more difficult or costly to access others. In reality, there’s no inherent difference in the traffic that comes from one site versus another; they’re all ones and zeros. Whether you choose to watch a movie on Netflix, Hulu, or FilmStruck shouldn’t be the business of your internet service provider any more than the brand of toaster you use to toast bread in the morning should be the business of your electric utility.
At Palantir, we believe that helping others discover, create, and share knowledge can help strengthen humanity. Many of the websites and online experiences that we architect, design, and build convey information that enable people to make more informed choices and in some cases, even help save lives. It’s important to us that our work is accessible to everyone regardless of what internet service provider they’re using.
That’s why we’re proud to join forces with hundreds of other sites around the web this July 12th for a day of action urging the FCC and Congress to preserve Title II net neutrality protections. You can find out more and make your voice heard at www.battleforthenet.com. We hope you’ll stand alongside us to help keep the web a free and open place.
I was going through my email recently when I came across an article that caught my attention. It was about a team of .NET developers that learned Node.js and React while building a product. The following reflection stood out to me:

“The philosophy of React is totally different to anything we have worked on so far. We were in a constant fight with it. One of the problems was finding the correct source of truth. There are so many different articles and tutorials online, some of which are no longer relevant. React documentation is OK-ish, but we didn’t want to invest too much time going over it and opted for a quick start instead.”
For the past two+ years I’ve worked exclusively on React projects and I’ve had my own up-and-down learning experience with it. Over that time, I’ve developed some advice for how to learn React—the resources, the sequence and the important takeaways.
What follows is a five-step plan for learning React. All of the steps point you to free resources, although there are fallbacks listed on some of the steps that are paid courses or tutorials. The free stuff will get you there, however.
Finally, we finish with an “other things to consider” section. You’ll often hear people say that React is just a view library. While that’s true to some extent, it’s also a large and vibrant ecosystem. This last section will touch on important things to know or consider that aren’t covered in the five main steps. These things aren’t essential to building something real and significant, but you should look into them as you continue learning how to build software with React.

Step One - React Documentation + Code Sandbox
Yes, you should start by reading the React documentation. It’s well written and you’ll understand the essential terminology and concepts by the time you’re finished. The link I shared above points to the first section of the documentation on installation. There is a link there to a CodePen to help you get started.
An alternative that I prefer is Code Sandbox. I think it gives a better feel for a basic React application. You can use it to try things out as you work through the docs.
There is another option on that page under the “Create a New App” tab, which is to use Create React App to build a development environment on your local machine. Create React App is a great tool and perhaps using it right away will help you. Everyone has a learning approach that works best for them.
Personally, I feel it adds mental load right as you’re getting started. My advice is to stick with Code Sandbox or CodePen when you’re beginning and focus on fundamental concepts instead of the development environment. You can always circle back later.
My recommendation is to read the Quick Start and Advanced Guides sections. I know not everyone likes reading documentation, particularly visual or auditory learners. If you’re struggling to get through those sections, then the key areas to focus on are in the Quick Start:
- Introducing JSX
- Rendering Elements
- Components and Props
- State and Lifecycle (super important!)
- Handling Events
- Composition vs Inheritance
- Thinking In React
Again, my advice is to read the full two sections, but at a bare minimum, power through the areas in the list above. They cover basic concepts that you will need.

Step Two - React Fundamentals Course
After Step One, you should have the gist of what React is all about. Maybe not everything is clear, but you’re starting to see some of the concepts take shape in your mind. The next step is the React Fundamentals course from React Training. It’s free and it’s good. The instructor is Tyler McGinnis and he’s both knowledgeable and easy to follow in his instruction.
Why this course? If you’re more of a visual or auditory learner, this will help you. It covers the basics as well as introduces key things that you’ll need to build something real—like webpack and fetching remote data.
By the end of the course you will have what you need to build a basic React application. Depending on what your goals are in learning React, you may even have all the information you need upon completing this course. Most of you—those learning React to build client projects—will need to keep going.
Important point: Do the exercises in the course. Following along will benefit you much more than just watching him go through them.
There’s a lot of good stuff in this course, but you’ll want to come away with a handle on the following:
- Reinforcement of principles from the React docs
- Intro to the build tools for React projects, particularly webpack
- Understanding the this keyword
- Stateless functional components
- Routing (how you navigate from one “page” to the next)
- Fetching async data
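As a taste of one of those takeaways: a stateless functional component is just a function of props, which makes components easy to compose and test. A sketch without JSX or the React library (the components return plain strings instead of React elements, and the names are made up):

```javascript
// Stateless functional components: pure functions of props, no internal state.
const Avatar = ({ url }) => `<img src="${url}">`;

// Composition: a component builds its output from other components.
const UserCard = ({ user }) =>
  `<div>${Avatar({ url: user.avatarUrl })}<p>${user.name}</p></div>`;

const html = UserCard({ user: { name: 'Ada', avatarUrl: '/ada.png' } });
console.log(html);
```

In real React these functions would return elements via JSX (e.g. `<img src={url} />`), but the shape — props in, view out — is the same.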
The next step is ReactBits, a wonderful resource from Vasa at WalmartLabs. This isn’t a book, really. It’s more a series of tips, in a very useful outline format, that can help fill in the gaps left by other tutorials.
There is gold here. And it’s a resource that gets regularly updated so you can return to it as React continues to evolve. Again, I encourage you to read all the tips, but if you struggle with it, here’s what to focus on:
- Design Patterns and Techniques (most important tip is probably this)
- Anti-Patterns (if nothing else, read this section)
- Perf Tips
Another great thing about ReactBits is that each tip includes references, so you can do more research on the topic yourself, or at least understand why the author thinks it’s a best practice.

Architecture Interlude
Thus far we have identified resources that will teach how to create simple applications. Before we continue, however, we need to pause to consider what happens when a React application becomes more complex. As a React application grows, common problems often arise. For example, how does one share state across multiple components? How can one clean up API calls that have been scattered throughout the app as functionality has been added?
Facebook came up with an answer to these questions—the Flux architecture. Some time later, Dan Abramov created an implementation of Flux he called Redux. It has been hugely influential in the React community and beyond. Facebook liked Dan’s work and subsequently hired him as part of the React team.
I recommend you learn Redux, but you should know there are other options, most notably, MobX. I’m not going to do a Redux vs. MobX run down in this post. I will only note that the general consensus is that MobX is typically easier to learn, but Redux is better suited to larger projects.
One of the reasons Redux is viewed as especially suitable for large projects is that, much like React itself, Redux is more than the simple library it is often billed as. It is also an architectural pattern that can bring a lot of predictability to your app. On a big project with lots of moving parts (and developers), this is a tremendous asset.
One final thing to note about Redux is it also has a very robust ecosystem around it. Redux supports middleware and there are a large number of libraries that can add debugging (with time travel), data handling, user flows, authentication, routing and more.
I encourage you to learn Redux. If you take a look at it and decide it’s not right for you, then MobX may be something that will work better. These are just tools. Use what helps you.
I’m including a talk below by Preethi Kasireddy from React Conf 2017 on Redux vs. MobX so you can get a feel for the pros and cons of each.
One last thing on architecture…
If it feels like Redux or MobX are too heavy for your application, consider the container component pattern. It can tidy things up by separating logic from presentation. It can help you see at a glance where API calls and other logic resides and may be all that you need to improve the organization of your app.
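A rough sketch of that container pattern, with plain functions standing in for React components (the names and the stubbed fetcher are hypothetical; a real container would fetch in a lifecycle method or effect and then render the presentational component with the result):

```javascript
// Presentational component: knows only how to display the props it receives.
const CommentList = ({ comments }) =>
  `<ul>${comments.map((c) => `<li>${c}</li>`).join('')}</ul>`;

// Container component: owns the data fetching and passes results down as props.
// In real React this would be a component with componentDidMount/useEffect.
async function CommentListContainer({ fetchComments }) {
  const comments = await fetchComments(); // e.g. fetch('/api/comments')
  return CommentList({ comments });
}

// Usage with a stubbed fetcher:
CommentListContainer({ fetchComments: async () => ['First!', 'Nice post'] })
  .then((html) => console.log(html));
```

The separation means you can see at a glance where the API calls live, and the presentational component stays trivially testable.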
The documentation for Redux is good and you should start there. One thing to note is that Redux uses a functional programming style and if you’re coming from a Java or C# background, there may be some unfamiliar syntax. Don’t worry about it. If you see something weird, set it aside. After you’re through with this step and the next, you’ll have a handle on it.
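To make the terminology concrete before you dive into the docs: actions are plain objects, a reducer is a pure function of (state, action), and the store ties them together. This toy createStore is a sketch of the idea, not the real library's implementation:

```javascript
// A miniature Redux-style store, for illustration only.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the only way state changes
      listeners.forEach((l) => l());
    },
    subscribe(listener) { listeners.push(listener); },
  };
}

// A reducer in the functional style the docs use: no mutation, just
// (state, action) => newState.
const counter = (state, action) => {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
};

const store = createStore(counter, 0);
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState()); // 2
```

That unfamiliar-looking functional syntax in the docs boils down to this: pure functions and immutable updates.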
There is a series of videos called Getting Started with Redux by Dan Abramov. They’re available for free on Egghead.io and they’re a good resource. I have a colleague who thought that the videos made the documentation easier to understand. If you learn better from video tutorials, then start with them, but be sure to go back to the docs. The documentation contains helpful information that is omitted from the videos.
The thing you want to have at this point is not mastery, but a basic handle on Redux terminology and concepts:
- Three principles of Redux
- Data flow
- Usage with React
- Async actions
- Async Flow
Go through these resources and if, when you’re finished, you feel like you somewhat get it, then you’re on track. Redux has an easy/hard quality to it at first: the individual pieces are mostly easy to understand, but putting them all together in your head often takes a bit more time. The resources in the next step will help with this.

Step Five - The Complete Redux Book + Redux Video Series Part 2
There is a great book on Redux that can be had for free, The Complete Redux Book. This is a book written by developers who are building serious React applications. It will help you learn how to architect your application and go deep into the concepts introduced in the previous step. It will also help you understand the basics of functional programming and make working with Redux easier.
Note that this book is on LeanPub and the suggested price is $32, but you can get it for free. If you have the money, consider paying. The authors have done a very good job and it’s worth the money.
The next resource is a second video tutorial series by Dan Abramov, Building React Applications with Idiomatic Redux. There is overlap between these videos and the book. What you choose to do here will depend on how much time you have and what learning style best suits you. If you can, do both.
There is a book that runs $39 called FullStackReact. It’s $79 if you want all the code samples and a three hour screencast. I haven’t read the book, but one of the authors is Tyler McGinnis. I recommended his work in Step Two.
This might be worth a look if you have the funds. One thing I’m cautious about is the emphasis on GraphQL and Relay. Those two technologies—particularly GraphQL—are interesting. They are worth learning. However, if you are going to be building an app that uses a REST API, then maybe postpone the purchase.
Congratulations - this is the final step! At this point you should have:
- Reinforcement of principles from the Redux docs and/or first video series
- Understanding of basic functional programming principles
- Understanding of creating and writing Redux middleware
- Understanding of how to architect a React + Redux application
Of course, when it comes to software, our learning is never complete…

Other Things
There are a few other things about the React ecosystem I’d like to share that can be the subject of future learning. I won’t go into too much detail, but be aware that you may run into these sooner or later, depending on your projects.
Webpack is the primary bundler tool for React applications. It’s discussed in the React Fundamentals course, but you’ll probably have to go deeper at some point. You can use another tool, but finding examples if you get stuck may be difficult. A good, free introduction is this presentation by Emil Oberg and it includes a link to the code he writes in the video.
Another good resource—not free—is Webpack 2 the Complete Developer’s Guide by Stephen Grider. This is a good course and is available on Udemy for $10-75. Udemy frequently offers discounts on courses, so you should be able to get a good deal on it.
If you need server rendering and your head feels full from too much learning, you might consider Next.js which I will discuss shortly.
Redux Saga is middleware for Redux. It acts as a single place for side effects in your application. Side effects are often asynchronous things like data fetching and keeping them contained is an important concept in functional programming—something that is big in the React community.
Using middleware like Redux Saga can help with the architecture of your application. It will certainly make writing tests easier (see Jest and Enzyme to learn more about testing React apps). The downside to Redux Saga is that it adds more mental load, particularly if you aren’t yet familiar with ES6 generators. In the long term, however, it’s a good investment. Consider learning it and other Redux middleware once you’ve got a firm handle on the content in the five steps.
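To make the generator idea concrete, here is a toy version of the saga pattern: the saga yields plain objects that *describe* side effects, and a runner executes them. The effect helper and runner below are hypothetical and far simpler than the real Redux Saga API, but they show why sagas are easy to test — until the runner acts, an effect is just data:

```javascript
// Effect creator: describes a side effect without performing it.
function call(fn, ...args) { return { type: 'CALL', fn, args }; }

// Toy runner: steps the generator, executes each described effect, and
// feeds the result back in.
async function runSaga(saga) {
  const it = saga();
  let result = it.next();
  while (!result.done) {
    const effect = result.value;
    if (effect && effect.type === 'CALL') {
      const value = await effect.fn(...effect.args);
      result = it.next(value);
    } else {
      result = it.next(); // unknown effect: skip (a real runner handles many types)
    }
  }
  return result.value;
}

// A saga that "fetches" a user via a described effect (stubbed fetcher):
const fakeFetchUser = async (id) => ({ id, name: 'Ada' });
function* fetchUserSaga() {
  const user = yield call(fakeFetchUser, 1);
  return user.name;
}

runSaga(fetchUserSaga).then((name) => console.log(name)); // logs "Ada"
```

In a test, you can step `fetchUserSaga()` by hand and assert on the yielded effect objects without ever touching the network.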
Reselect is a selector library for Redux. It can help improve performance when computing derived data. For example, if you have a value derived from your Redux store, Reselect won’t recompute it unless one of its inputs changes, preventing unnecessary re-rendering. This can be useful for shopping carts, “likes”, scoring, etc.
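The core idea fits in a few lines: a memoized selector that recomputes only when its inputs change. This is a single-cache, reference-equality toy, much simpler than the real Reselect:

```javascript
// Minimal memoized selector: recompute only when input values change.
function createSelector(inputSelectors, compute) {
  let lastArgs = null;
  let lastResult;
  return (state) => {
    const args = inputSelectors.map((sel) => sel(state));
    if (lastArgs && args.every((a, i) => a === lastArgs[i])) {
      return lastResult; // inputs unchanged: return the cached result
    }
    lastArgs = args;
    lastResult = compute(...args);
    return lastResult;
  };
}

// Derived cart total recomputes only when state.items changes:
let computations = 0;
const selectTotal = createSelector(
  [(state) => state.items],
  (items) => { computations += 1; return items.reduce((sum, i) => sum + i.price, 0); }
);

const items = [{ price: 5 }, { price: 7 }];
console.log(selectTotal({ items })); // 12
console.log(selectTotal({ items })); // 12 again, from the cache
console.log(computations);           // 1
```

Because Redux state is updated immutably, reference equality on the inputs is enough to know whether recomputation is needed.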
At the beginning of this post, I mentioned Create React App. It’s an app scaffolder. It can help you get started building an app very quickly. I leave it to you to read up on it, but one potential downside is that it doesn’t have server rendering. If you want that, you’ll have to add it on your own.
Another option is Next.js from Zeit. I haven’t used it, but it looks interesting. Maybe it could help you get started. It’s sort of a framework within a framework (React). It does a lot for you and as a result is opinionated, but there are good docs for it. My concern would be the “black box” nature of it. I would need to understand the internals well before I felt confident using it on a client project. I’d be interested to hear from anyone who has experience with it.

And Finally…
Thanks for hanging with me. This was a long post. I wish I could have written a post called “How to Quickly Learn React in Five Easy Steps,” but that’s not how it goes. There is a lot to learn and the learning never stops. It’s something I like about software development, but there are times it can be stressful.
With budget cuts and rising expectations, higher education websites have become a challenging balancing act of function and affordability. Late in 2016, we set out to leverage a CMS to create a repeatable, flexible website solution that meets the current expectations of higher ed clients and leaves room for them to make it their own -- without requiring custom development for each client. We also wanted to ensure that it could be deployed for under $50,000; put the work of managing and maintaining the site into the hands of the content team; and have a low recurring cost to the client.
Annually, the information technology research firm Gartner publishes its Magic Quadrant report comparing web content management systems (CMS) at the enterprise level. At this writing, the most recent report places Acquia/Drupal, Adobe Experience Manager (AEM), and Sitecore as the three leaders in the field, based on both their completeness of vision and their ability to execute on organizational requirements.
In the previous article in this series for Drupal sysadmins, Setting up Nginx on a Debian server as a front-end for Apache, we explained an Nginx configuration that serves static requests while Apache handles dynamic content. This article offers a look at an alternative setup, where PHP-FPM takes the place of Apache. The operating principle for our web server will be as follows:
At Cloudflare our focus is making the internet faster and more secure. Today we are announcing a new enhancement to our HTTPS service: High-Reliability OCSP stapling. This feature is a step towards enabling an important security feature on the web: certificate revocation checking. Reliable OCSP stapling also improves connection times by up to 30% in some cases. In this post, we’ll explore the importance of certificate revocation checking in HTTPS, the challenges involved in making it reliable, and how we built a robust OCSP stapling service.

Why revocation is hard
Digital certificates are the cornerstone of trust on the web. A digital certificate is like an identification card for a website. It contains identity information including the website’s hostname along with a cryptographic public key. In public key cryptography, each public key has an associated private key. This private key is kept secret by the site owner. For a browser to trust an HTTPS site, the site’s server must provide a certificate that is valid for the site’s hostname and a proof of control of the certificate’s private key. If someone gets access to a certificate’s private key, they can impersonate the site. Private key compromise is a serious risk to trust on the web.
Certificate revocation is a way to mitigate the risk of key compromise. A website owner can revoke a compromised certificate by informing the certificate issuer that it should no longer be trusted. For example, back in 2014, Cloudflare revoked all managed certificates after it was shown that the Heartbleed vulnerability could be used to steal private keys. There are other reasons to revoke, but key compromise is the most common.
Certificate revocation has a spotty history. Most of the revocation checking mechanisms implemented today don’t protect site owners from key compromise. If you already know why revocation checking is broken, feel free to skip ahead to the OCSP stapling section below.

Revocation checking: a history of failure
There are several ways a web browser can check whether a site’s certificate is revoked or not. The most well-known mechanisms are Certificate Revocation Lists (CRL) and Online Certificate Status Protocol (OCSP). A CRL is a signed list of serial numbers of certificates revoked by a CA. OCSP is a protocol that can be used to query a CA about the revocation status of a given certificate. An OCSP response contains signed assertions that a certificate is not revoked.
Certificates that support OCSP contain the responder's URL, and those that support CRLs contain a URL where the CRL can be obtained. When a browser is served a certificate as part of an HTTPS connection, it can use the embedded URL to download a CRL or an OCSP response and check that the certificate hasn't been revoked before rendering the web page. The question then becomes: what should the browser do if the request for a CRL or OCSP response fails? As it turns out, both answers to that question are problematic.

Hard-fail doesn’t work
When browsers encounter a web page and there’s a problem fetching revocation information, the safe option is to block the page and show a security warning. This is called a hard-fail strategy. This strategy is conservative from a security standpoint, but prone to false positives. For example, if the proof of non-revocation could not be obtained for a valid certificate, a hard-fail strategy will show a security warning. Showing a security warning when no security issue exists is dangerous because it can lead to warning fatigue and teach users to click through security warnings, which is a bad idea.
In the real world, false positives are unavoidable. OCSP and CRL endpoints are subject to service outages and network errors. There are also common situations where these endpoints are completely inaccessible to the browser, such as when the browser is behind a captive portal. Some access points used in hotels and airplanes block unencrypted traffic, including OCSP requests. A hard-fail strategy forces users behind captive portals and other networks that block OCSP requests to click through unnecessary security warnings. This reality is unpalatable to browser vendors.
Another drawback to a hard-fail strategy is that it puts an increased burden on certificate authorities to keep OCSP and CRL endpoints available and online. A broken OCSP or CRL server becomes a central point of failure for all certificates issued by a certificate authority. If browsers followed a hard-fail strategy, an OCSP outage would be an Internet outage. Certificate authorities are organizations optimized to provide trust and accountability, not necessarily resilient infrastructure. In a hard-fail world, the availability of the web as a whole would be limited by the ability of CAs to keep their OCSP services online at all times: a dangerous systemic risk to the internet as a whole.

Soft-fail: it’s not much better
To avoid the downsides of a hard-fail strategy, most browsers take another approach to certificate revocation checking. Upon seeing a new certificate, the browser will attempt to fetch the revocation information from the CRL or OCSP endpoint embedded in the certificate. If the revocation information is available, they rely on it, and otherwise they assume the certificate is not revoked and display the page without any errors. This is called a “soft-fail” strategy.
The soft-fail strategy has a critical security flaw. An attacker with network position can block the OCSP request. If this attacker also has the private key of a revoked certificate, they can intercept the outgoing connection for the site and present the revoked certificate to the browser. Since the browser doesn’t know the certificate is revoked and is following a soft-fail strategy, the page will load without alerting the user. As Adam Langley described: “soft-fail revocation checks are like a seat-belt that snaps when you crash. Even though it works 99% of the time, it's worthless because it only works when you don't need it.”
A soft-fail strategy also makes connections slower. If revocation information for a certificate is not already cached, the browser will block the rendering of the page until the revocation information is retrieved, or a timeout occurs. This additional step causes a noticeable and unwelcome delay, with marginal security benefits. This tradeoff is a hard sell for the performance-obsessed web. Because of the limited benefit, some browsers have eliminated live revocation checking for at least some subset of certificates.
Live OCSP checking has an additional downside: it leaks private browsing information. OCSP requests are sent over unencrypted HTTP and are tied to a specific certificate. Sending an OCSP request tells the certificate authority which websites you are visiting. Furthermore, everyone on the network path between your browser and the OCSP server will also know which sites you are browsing.

Alternative revocation checking
Some clients still perform soft-fail OCSP checking, but it’s becoming less common due to the performance and privacy downsides described above. To protect high-value certificates, some browsers have explored alternative mechanisms for revocation checking.
One technique is to pre-package a list of revoked certificates and distribute it through browser updates. Because the list of all revoked certificates is so large, only a few high-impact certificates are included. This technique is called OneCRL in Firefox and CRLSets in Chrome. It has been effective for some high-profile revocations, but it is by no means a complete solution: not only does it fail to cover all certificates, it also leaves a window of vulnerability between the time a certificate is revoked and the time the updated list reaches browsers.

OCSP Stapling
OCSP stapling is a technique for getting revocation information to browsers that fixes some of the performance and privacy issues associated with live OCSP fetching. In OCSP stapling, the server includes ("staples") a current OCSP response for the certificate in the initial HTTPS connection, removing the need for the browser to request the OCSP response itself. OCSP stapling is widely supported by modern browsers.
Not all servers support OCSP stapling, so browsers still take a soft-fail approach when the OCSP response is not stapled. Some browsers (such as Safari, Edge, and, for now, Firefox) check revocation status for all certificates, so for them OCSP stapling can provide a performance boost of up to 30%. For browsers like Chrome that don’t check revocation for all certificates, OCSP stapling provides a proof of non-revocation that they would not have otherwise.

High-reliability OCSP stapling
Cloudflare started offering OCSP stapling in 2012. Cloudflare’s original implementation relied on code from nginx that was able to provide OCSP stapling for some, but not all, connections. As Cloudflare’s network grew, the implementation wasn’t able to scale with it, resulting in a drop in the percentage of connections with OCSP responses stapled. The architecture we had chosen had served us well, but we could definitely do better.
In the last year, we redesigned our OCSP stapling infrastructure to make it much more robust and reliable. We’re happy to announce that we now provide reliable OCSP stapling for connections to Cloudflare. As long as the certificate authority has set up OCSP for a certificate, Cloudflare will staple a valid OCSP response. All Cloudflare customers now benefit from much more reliable OCSP stapling.

OCSP stapling past
In Cloudflare’s original implementation of OCSP stapling, OCSP responses were fetched opportunistically. Given a connection that required a certificate, Cloudflare would check to see if there was a fresh OCSP response to staple. If there was, it would be included in the connection. If not, then the client would not be sent an OCSP response, and Cloudflare would send a request to refresh the OCSP response in the cache in preparation for the next request.
If a fresh OCSP response wasn't cached, the connection wouldn't get an OCSP staple. The next connection for that same certificate would get an OCSP staple, because the cache would by then have been populated.
This architecture was elegant, but not robust. First, there are several situations in which the client is guaranteed not to get an OCSP response. For example, the first request in every cache region and the first request after an OCSP response expires are guaranteed not to have an OCSP response stapled. With Cloudflare's expansion to more locations, these failures became more common. Less popular sites would have their OCSP responses fetched less often, resulting in an even lower ratio of stapled connections. Another reason a connection could be missing an OCSP response is that the OCSP request from Cloudflare to fill the cache had failed. There was a lot of room for improvement.

Our solution: OCSP pre-fetching
In order to reliably include OCSP staples in all connections, we decided to change the model. Instead of fetching the OCSP response when a request came in, we would fetch it in a centralized location and distribute valid responses to all our servers. When a response started getting close to expiration, we'd fetch a new one. If the OCSP request failed, we would put it into a queue to re-fetch at a later time. Since most OCSP staples are valid for around 7 days, there is a lot of flexibility in terms of refreshing expiring responses.
To keep our cache of OCSP responses fresh, we created an OCSP fetching service. This service ensures that there is a valid OCSP response for every certificate managed by Cloudflare. We constantly crawl our cache of OCSP responses and refresh those that are close to expiring. We also make sure to never cache invalid OCSP responses, as this can have bad consequences. This system has been running for several months now, and we are now reliably including OCSP staples for almost every HTTPS request.
Reliable stapling improves performance for browsers that would have otherwise fetched OCSP, but it also changes the optimal failure strategy for browsers. If a browser can reliably get an OCSP staple for a certificate, why not switch back from a soft-fail to a hard-fail strategy?

OCSP must-staple
As described above, the soft-fail strategy for validating OCSP responses opens up a security hole. An attacker with a revoked certificate can simply neglect to provide an OCSP response when a browser connects to it and the browser will accept their revoked certificate.
In the OCSP fetching case, a soft-fail approach makes sense. There are many reasons a browser might not be able to obtain an OCSP response: captive portals, broken OCSP servers, network unreliability and more. However, as we have shown with our high-reliability OCSP fetching service, it is possible for a server to fetch OCSP responses without any of these problems. OCSP responses are re-usable and are valid for several days. When one is close to expiring, the server can fetch a new one out-of-band and reliably serve OCSP staples for all connections.
If the client knows that a server will always serve OCSP staples for every connection, it can apply a hard-fail approach, failing a connection if the OCSP response is missing. This closes the security hole introduced by the soft-fail strategy. This is where OCSP must-staple fits in.
OCSP must-staple is an extension that can be added to a certificate that tells the browser to expect an OCSP staple whenever it sees the certificate. This acts as an explicit signal to the browser that it’s safe to use the more secure hard-fail strategy.
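Sketching the browser-side logic in Go, hedged as a simplified model: the extension check uses the standard TLS Feature extension OID, and the accept/reject rule condenses the hard-fail versus soft-fail behavior described above.

```go
package main

import (
	"crypto/x509"
	"encoding/asn1"
	"fmt"
)

// mustStapleOID is the TLS Feature extension (OID 1.3.6.1.5.5.7.1.24)
// used to mark a certificate as OCSP must-staple.
var mustStapleOID = asn1.ObjectIdentifier{1, 3, 6, 1, 5, 5, 7, 1, 24}

// hasMustStaple reports whether a certificate carries the TLS Feature
// extension. (A complete check would also parse the extension value
// for the status_request feature.)
func hasMustStaple(cert *x509.Certificate) bool {
	for _, ext := range cert.Extensions {
		if ext.Id.Equal(mustStapleOID) {
			return true
		}
	}
	return false
}

// acceptConnection models the decision: hard-fail when the certificate
// demands a staple, soft-fail otherwise.
func acceptConnection(mustStaple, haveStaple, stapleSaysRevoked bool) bool {
	if haveStaple {
		return !stapleSaysRevoked
	}
	return !mustStaple // no staple: reject only must-staple certificates
}

func main() {
	fmt.Println(acceptConnection(false, false, false)) // true: soft-fail accepts a missing staple
	fmt.Println(acceptConnection(true, false, false))  // false: must-staple without a staple
}
```

The interesting row of the decision table is the second one: without the must-staple signal, a missing staple is silently tolerated, which is exactly the hole an attacker with a revoked certificate exploits.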
Firefox enforces OCSP must-staple, returning the following error if such a certificate is presented without a stapled OCSP response.
Chrome provides the ability to mark a domain as "Expect-Staple". If Chrome sees a certificate for the domain without a staple, it will send a report to a pre-configured report endpoint.

Reliability
As a part of our push to provide reliable OCSP stapling, we put our money where our mouths are and put an OCSP must-staple certificate on blog.cloudflare.com. Now if we ever fail to serve an OCSP staple, this page will fail to load on browsers like Firefox that enforce must-staple. You can identify a must-staple certificate by looking in the certificate details for the "1.3.6.1.5.5.7.1.24" OID.
Cloudflare customers can choose to upload must-staple custom certificates, but we encourage them not to do so yet because there may be a multi-second delay between the certificate being installed and our ability to populate the OCSP response cache. This will be fixed in the coming months. Other than the first few seconds after uploading the certificate, Cloudflare’s new OCSP fetching is robust enough to offer OCSP staples for every connection thereafter.
As of today, an attacker with access to the private key for a revoked certificate can still hijack the connection. All they need to do is place themselves on the network path of the connection and block the OCSP request. OCSP must-staple prevents that, since an attacker will not be able to obtain an OCSP response that says the certificate has not been revoked.

The weird world of OCSP responders
For browsers, an OCSP failure is not the end of the world. Most browsers are configured to soft-fail when an OCSP responder returns an error, so users are unaffected by OCSP server failures. Some Certificate Authorities have had massive multi-day outages in their OCSP servers without affecting the availability of sites that use their certificates.
There’s no strong feedback mechanism for broken or slow OCSP servers. This lack of feedback has led to an ecosystem of faulty or unreliable OCSP servers. We experienced this first-hand while developing high-reliability OCSP stapling. In this section, we’ll outline half a dozen unexpected behaviors we found when deploying high-reliability OCSP stapling. A big thanks goes out to all the CAs who fixed the issues we pointed out. CA names redacted to preserve their identities.

CA #1: Non-overlapping periods
We noticed CA #1 certificates frequently missing their refresh deadline, and during debugging we were lucky enough to see this:

$ date -u
Sat Mar 4 02:45:35 UTC 2017
$ ocspfetch <redacted, customer ID>
This Update: 2017-03-04 01:45:49 +0000 UTC
Next Update: 2017-03-04 02:45:49 +0000 UTC
$ date -u
Sat Mar 4 02:45:48 UTC 2017
$ ocspfetch <redacted, customer ID>
This Update: 2017-03-04 02:45:49 +0000 UTC
Next Update: 2017-03-04 03:45:49 +0000 UTC
The transcript shows that CA #1 had configured their OCSP responders to use an incredibly short validity period with almost no overlap between validity periods, which makes it functionally impossible to always have a fresh OCSP response for their certificates. We contacted them, and they reconfigured the responder to produce new responses every half-interval.

CA #2: Wrong signature algorithm
Several certificates from CA #2 started failing with this error:

bad OCSP signature: crypto/rsa: verification error
The issue is that the OCSP response claims to be signed with SHA256-RSA when it is actually signed with SHA1-RSA (and the reverse: some indicate SHA1-RSA but are actually signed with SHA256-RSA).

CA #3: Malformed OCSP responses
When we first started the project, our tool was unable to parse dozens of certificates in our database because of this error:

asn1: structure error: integer not minimally-encoded

and many OCSP responses that we fetched from the same CA failed with:

parsing ocsp response: bad OCSP signature: asn1: structure error: integer not minimally-encoded
What happened was that this CA had begun issuing <1% of certificates with a minor formatting error that rendered them unparseable by Golang’s x509 package. After we contacted them directly, they quickly fixed the issue, but we still had to patch Golang's parser to be more lenient about encoding bugs.

CA #4: Failed responder behind a load balancer
A small number of CA #4’s OCSP responders fell into a "bad state" without the CA knowing, and would return 404 on every request. Since the CA used load balancing to round-robin requests across a number of responders, roughly 1 in 6 requests would fail inexplicably.
Two times between Jan 2017 and May 2017, this CA also experienced some kind of large data-loss event that caused them to return persistent "Try Later" responses for a large number of requests.

CA #5: Delay between certificate and OCSP responder
When a certificate is issued by CA #5, there is a large delay between the time the certificate is issued and the time the OCSP responder can start returning signed responses for it. The problem has mostly been resolved, but this general pattern is dangerous for OCSP must-staple. The CA/B Forum, the organization that regulates the issuance and management of certificates, has discussed changes that would require CAs to offer OCSP soon after issuance.

CA #6: Extra certificates
It is typical to embed only one certificate in an OCSP response, if any. The one embedded certificate is supposed to be a leaf specially issued for signing an intermediate's OCSP responses. However, several CAs embed multiple certificates: the leaf they use for signing OCSP responses, the intermediate itself, and sometimes all the intermediates up to the root certificate.

Conclusions
We made OCSP stapling better and more reliable for Cloudflare customers. Despite the various strange behaviors we found in OCSP servers, we’ve been able to consistently serve OCSP responses for over 99.9% of connections since we’ve moved over to the new system. This great work was done by Brendan McMillion and Alessandro Ghedini. This is an important step in protecting the web community from attackers who have compromised certificate private keys.
Drupal's Entity reference fields are the magic sauce that allows site builders and developers to relate different types of content. Because the field allows builders and administrators to reference different types of content (entities), it facilitates the building of complex data models and site architectures.
Like anything in Drupal, the community takes the core tools and builds additional functionality on top. Here is a slew of modules that extend or complement the Drupal 8 core reference field so you can do even more!

Entity Reference Revisions
This module is from the team that brought you the Paragraphs module. It adds an Entity Reference field type with revision support: it's based on the core Entity Reference field, but allows you to reference a specific revision of an entity.
This is useful when an entity is actually part of a parent entity (edited with an embedded entity form). When the parent entity is updated, the referenced entity is also updated, so the previous revision of the parent entity should still point to the previous revision of the referenced entity in order to fully support revision diffs and rollback.

Entity Reference Override
An entity reference field paired with a text field. You can use this module to
- Override the title if you are linking to the referenced entity.
- Add an extra CSS class to the referenced entity.
- Override the default display mode for the field on an entity-by-entity basis.
Example use case: aggregating lists of referenced entities, like related articles, where you want to override the name or appearance of individual items.

Entity references with text
Allows you to provide custom text along with one or more entity references.
- Add a referenced author, the word "and", and then another referenced author.
- On another node, add a referenced author, the words "with support from", and then another referenced author.
- On another node, add a referenced author, a comma ",", another referenced author, the word "and", and then another referenced author.
This module adds a field type that allows you to select the display mode for entity reference fields. This allows an editor to select from different display modes such as Teaser, Full, or any you add.
It also includes a Selected display mode field formatter which renders the referenced entities with the selected display mode.
Example use case: allowing the administrator to change the display of related articles from a grid display mode to a list display mode.

Entity Reference Views Select
This module allows you to change your Entity reference fields to be displayed as a select list, or checkboxes/radio buttons in administrative forms. It does this by allowing you to use Views as the reference method, where you can format the results giving the administrator a much better experience.
Example use case: showing an icon or thumbnail next to each item in the selectable list when referencing entities such as images.

Entity Reference Tab / Accordion Formatter
This cool little module works on both Entity Reference and Entity Reference Revisions fields and provides a field formatter for displaying the referenced entity in jQuery Tabs or jQuery Accordion.
Example use case: returning multivalue Paragraphs items in tabs or accordion format.

Views Entity Reference Filter
Provides a new admin-friendly Views filter for configuring entity reference fields. It allows users to select the labels of the entities they want to filter on rather than manually entering the IDs.
Example use case: providing a better admin experience.

Better Entity Reference Formatter
This module extends Drupal's default field formatter for entity reference fields to make it more flexible by introducing a new field formatter plugin. Alongside the view mode option, you can also define the number of entities to return, select a specific entity (such as the first or last), or apply an offset.
Example use case: showing the first related product referenced from another product.

Entity Reference Validators
The plural in its name suggests more is coming, but currently this module adds a single validator for Entity Reference fields: the circular reference validator, which prevents an entity reference field from referencing its own entity.
Example use case: preventing an entity reference field on node 1 from linking to node 1.

Entity Reference Integrity
This interesting sounding module lists other entities that reference your entity.
It also includes a sub-module, Entity Reference Integrity Enforce, which protects entities that are referenced by other entities by preventing their deletion.
Example use case: protecting the integrity of the site by protecting referenced content.

Entity Reference Quantity
This module extends the default entity reference field to include a "Quantity" value in the field definition, so you don't have to build a separate entity just to store two distinct fields.
It also includes autocomplete and dropdown field widgets that let you select the entity and add the quantity value at the same time.
I'll use the example from this great blog on the module: a real-world example might be a deck builder for a trading card game like Magic: The Gathering or the DragonBall Z TCG. We want to reference a card from a deck entity and put in the quantity at the same time.

Entity Reference Formatter
This module creates a generic field formatter for referenced entities which allows you to select the formatter based on the referenced entity in the display settings form.
Example use case: when referencing custom entities that don't have view modes of their own, you don't need to write your own custom formatters.

Permissions by field
This module extends the Entity Reference field to add permissions along with the referenced entity. By adding this field, you can manage access to the entities referenced by the field and select a permission level (none, view, update, delete).
Example use case: a lighter version of the Organic Groups or Group modules.

Views Reference Field
In Drupal 8, Views are now entities! Drupal core's Entity Reference field can reference Views, but it can't reference individual Views displays. This module extends core's entity reference module to add the display ID so that a View can be rendered in a field formatter.
Example use case: adding a Views reference field to a Paragraphs bundle so you can have a view in and around other paragraph bundles. We implemented this technique in the Bootstrap Paragraphs module.

Dynamic Entity Reference
So cool! This awesome module lets you reference more than one entity type. It creates a single field in which you can reference Users, Blocks, Nodes, Contact Forms, Taxonomy Terms, and more!
Creating a "related” field, and allowing your administrators to select anything they want.Entity Reference Drag & Drop
This module creates a Drag & Drop widget for the standard Entity reference fields. It provides you with a list of available entities on the left, and you can select them by dragging and dropping them to the list on the right.
Example use case: providing a better admin experience.
Thanks to Mike Acklin for his help with this article, and to all the awesome module maintainers and contributors!
On January 27, 2008, the first RC followed, with boatloads of new features. Over the years, it was ported to Drupal 6, 7 and 8 and gained more features (I effectively added every single feature that was requested — I loved empowering the site builder). I did the same with my Hierarchical Select module.
I was a Computer Science student for the first half of those 9.5 years, and it was super exciting to see people actually use my code on hundreds, thousands and even tens of thousands of sites! In stark contrast with the assignments at university, where the results were graded, then discarded.

Frustration
Unfortunately this approach resulted in feature-rich modules, with complex UIs to configure them, and many, many bug reports and support requests, because they were so brittle and confusing. Rather than making the 80% case simple, I supported 99% of needed features, and made things confusing and complex for 100% of the users.
Main CDN module configuration UI in Drupal 7.
In Acquia’s Office of the CTO, my job is effectively “make Drupal better & faster”.
In 2012–2013, it was improving the authoring experience by adding in-place editing and tightly integrating CKEditor. Then it shifted in 2014 and 2015 to “make Drupal 8 shippable”, first by working on the cache system, then on the render pipeline and finally on the intersection of both: Dynamic Page Cache and BigPipe. After Drupal 8 shipped at the end of 2015, the next thing became “improve Drupal 8’s REST APIs”, which grew into the API-First Initiative.
All this time (5 years already!), I’ve been helping to build Drupal itself (the system, the APIs, the infrastructure, the overarching architecture), and have seen the long-term consequences from both up close and afar: the concepts required to understand how it all works, the APIs to extend, override and plug in to. In that half decade, I’ve often cursed past commits, including my own!
That’s what led to:
- my insistence that the dynamic_page_cache and big_pipe modules in Drupal 8 core do not have a UI, nor any configuration, and rely entirely on existing APIs and metadata to do their thing (with only a handful of bug reports in >18 months!)
- my “Backwards Compatibility: Burden & Benefit” talk a few months ago
- and of course this rewrite of the CDN module
I started porting the CDN module to Drupal 8 in March 2016 — a few months after the release of Drupal 8. It is much simpler to use (just look at the UI). It has less overhead (the UI is in a separate module, the altering of file URLs has far simpler logic). It has lower technical complexity (File Conveyor support was dropped, it no longer needs to detect HTTP vs HTTPS: it always uses protocol-relative URLs, there is less unnecessary configurability, and the farfuture functionality no longer has extremely detailed configurability).
In other words: the CDN module in Drupal 8 is much simpler. And has much better test coverage too. (You can see this in the tarball size too: it’s about half of the Drupal 7 version of the module, despite significantly more test coverage!)
CDN UI module in Drupal 8.
The Drupal 8 version of the CDN module includes:
- all the fundamentals
- the ability to use simple CDN mappings, including conditional ones depending on file extensions, auto-balancing, and complex combinations of all of the above
- preconnecting (and DNS prefetching for older browsers)
- a simple UI to set it up — in fact, much simpler than before!
- the CDN module now always uses protocol-relative URLs, which means there’s no more need to distinguish between HTTP and HTTPS, which simplifies a lot
- the UI is now a separate module
- the UI is optional: for power users there is a sensible configuration structure with strict config schema validation
- complete unit test coverage of the heart of the CDN module, thanks to D8’s improved architecture
- preconnecting (and DNS prefetching) using headers rather than tags in the HTML head, which allows a much simpler/cleaner Symfony response subscriber
- tours instead of advanced help, which very often was ignored
- there is nothing to configure for the SEO (duplicate content prevention) feature anymore
- nor is there anything to configure for the Forever cacheable files feature anymore (named Far Future expiration in Drupal 7), and it’s a lot more robust
Removed, compared to the Drupal 7 version:
- File Conveyor support
- separate HTTPS mapping (also mentioned above)
- all the exceptions (blacklist, whitelist, based on Drupal path, file path…) — all of them are a maintenance/debugging/cacheability nightmare
- configurability of SEO feature
- configurability of unique file identifiers for the Forever cacheable files feature
- testing mode
For very complex mappings, you must manipulate cdn.settings.yml — there’s inline documentation with examples there. Those who need the complex setups don’t mind reading three commented examples in a YAML file. This used to be configurable through the UI, but it also was possible to configure it “incorrectly”, resulting in broken sites — that’s no longer possible.
There’s comprehensive test coverage for everything in the critical path, and basic integration test coverage. Together, they ensure peace of mind, and uncover bugs in the next minor Drupal 8 release: BC breaks are detected early and automatically.

The results after 8 months: contributed module maintainer bliss
The first stable release of the CDN module for Drupal 8 was published on December 2, 2016. Today, I released the first patch release: cdn 8.x-3.1. The change log is tiny: a PHP notice fixed, two minor automated testing infrastructure problems fixed, and two new minor features added.
We can now compare the Drupal 7 and 8 versions of the CDN module:
- 149 support requests for the Drupal 7 version, with 14 in the last 12 months (the module is stable now after all these years of course) and 83 bug reports over 6.5 years (78 months), with ~6000 sites using it.
- 7 support requests for the Drupal 8 version in the last 8 months and 1 bug report (a bug in a test). With ~500 sites using it.
In other words: maintaining this contributed module now requires pretty much zero effort!

Conclusion
For your own Drupal 8 modules, no matter if they’re contributed or custom, I recommend a few key rules:
- Selective feature set.
- Comprehensive unit test coverage for critical code paths (UnitTestCase)2 + basic integration test coverage (BrowserTestBase) maximizes confidence while minimizing time spent.
- Don’t provide/build APIs (that includes hooks) unless you see a strong use case for it. Prefer coarse over granular APIs unless you’re absolutely certain.
- Avoid configurability if possible. Otherwise, use config schemas to your advantage, provide a simple UI for the 80% use case. Leave the rest to contrib/custom modules.
This is more empowering for the Drupal site builder persona, because they can’t shoot themselves in the foot anymore. It’s no longer necessary to learn the complex edge cases in each contributed module’s domain, because they’re no longer exposed in the UI. In other words: domain complexities no longer leak into the UI.
At the same time, it hugely decreases the risk of burnout in module maintainers!
And of course: use the CDN module, it’s rock solid! :)

Related reading
Finally, read Amitai Burstein’s “OG8 Development Mindset”! He makes very similar observations, albeit about a much bigger contributed module (Organic Groups). Some of my favorite quotes:
- About edge cases & complexity:

I think there is another hidden merit in tests. By taking the time to carefully go over your own code - and using it - you give yourself some pause to think about the necessity of your recently added code. Do you really need it? If you are not afraid of writing code and then throwing it out the window, and you are true to yourself, you can create a better, less complex, and polished module.

- About feature set & UI:

One of the mistakes that I feel we made in OG7 was exposing a lot of the advanced functionality in the UI. […] But these are all advanced use cases. When thinking about how to port them to OG8, I think we found the perfect solution: we didn’t port it.

Unit tests in Drupal 8 are wonderful; they’re nigh impossible in Drupal 7. They finish running in seconds. ↩︎
We at Cloudflare strongly believe in network neutrality, the principle that networks should not discriminate against content that passes through them. We’ve previously posted on our views on net neutrality and the role of the FCC here and here.
In May, the FCC took a first step toward revoking bright-line rules it put in place in 2015 to require ISPs to treat all web content equally. The FCC is seeking public comment on its proposal to eliminate the legal underpinning of the 2015 rules, revoking the FCC's authority to implement and enforce net neutrality protections. Public comments are also requested on whether any rules are needed to prevent ISPs from blocking or throttling web traffic, or creating “fast lanes” for some internet traffic.
To raise awareness about the FCC's efforts, July 12th will be “Internet-Wide Day of Action to save Net Neutrality.” Led by the group Battle for the Net, participating websites will show the world what the web would look like without net neutrality by displaying an alert on their homepage. Website users will be encouraged to contact Congress and the FCC in support of net neutrality.
We wanted to make sure our users had an opportunity to participate in this protest. If you install the Battle For The Net App, your visitors will see one of four alert modals — like the “spinning wheel of death” — and have an opportunity to submit a comment to the FCC or a letter to Congress in support of net neutrality. You can preview the app live on your site, even if you don’t use Cloudflare yet.
To participate, install the Battle For The Net App. The app will appear for your site's visitors on July 12th, the Day of Action for Net Neutrality.
We finally set a date, thanks to a visit from Enzo. We are calling it a "PicNic" and not a camp due to the scale it will have. It will span two days:
- On Saturday 22nd we'll have a sprint with mentoring
- On Monday 24th there will be technical conferences
A quick tip for all Drupalistas out there: if you want to use jQuery.cookie in your project, you actually don't have to download and install the library. jQuery.cookie is part of Drupal 7 and can be included as easily as typing:
- drupal_add_library('system', 'jquery.cookie');
Coming into a new company, and a new distributed company, in particular, can be overwhelming. When you start a job in a physical building, often someone in the office takes you under their wing (you hope) or is asked to give you a tour. Making connections with people while trying to master new tools and processes is hard. Having one person who is not just answering questions, but perhaps takes you out to lunch or sits down with you over coffee, offers a safe place to begin exploring the new world around you. This person can become your go-to as you acclimate yourself to the general workflow in the office.
Akin to that, Lullabot has found a way to embrace this practice in a distributed fashion. We assign each new hire a “Lullabuddy.” Their job is to create a safe environment in which a new employee can confidently ask any question about their new job.
We introduce new hires to their Lullabuddy in their “welcome email,” and the Lullabuddy takes it from there. The Lullabuddy either answers the new hire's questions or points them to the person who can. We often compare starting a new job at Lullabot to being thrown into the deep end of a pool as there’s no gentle wading into a distributed company. We never want a new hire to feel lost or alone like they’re drowning in logins and novel communication methods. By giving each new hire a Lullabuddy, Lullabot also hopes to preclude a sense of isolation that sometimes occurs when a person starts a new job from home. It's one of the most effective things that Lullabot has found to do to foster humanity across time zones and distance.
Over the past few years, we’ve put together a few guidelines that ensure a good fit in a Lullabuddy.
- Remembers what it’s like to be a new hire but has enough experience to guide.
- Is not your boss and not already a friend, but is your first friend at Lullabot.
- Takes initiative to check in regularly.
- When possible is working on the same initial project.
- Is not on vacation in the new hire’s first two weeks (heh, we only had to learn that once.)
- Is available; has time to convey empathy and warmth.
- Is a good listener; can also read between the lines.
- Takes notice of the new hire’s Yammer/Slack participation and makes comments.
- Explains memes and inside jokes.
- Sets a good example in their communication.
- Is like a good driving instructor (tells you what to focus on and what to tune out.)
- Lets management know if something is amiss.
- Understands and transmits Lullabot’s core values.
Any Lullabot can request to be a Lullabuddy; we just ask that they have the bandwidth in their schedule to commit to the responsibilities. It’s a bonus when the Lullabuddy and the new employee are close geographically. An in-person sync-up for dinner or coffee (on Lullabot) is a wonderful way to start a new job. (Yes, we admit, there's no substitute for face-to-face contact.)
We have evolved the Lullabuddy role so that it follows a checklist in BambooHR with a few tasks.
We like to go over the expectations before a team member commits to the role, so they are clear on what they're signing up to do. The primary goal is to support our new hire, so the role may look a little different depending on an individual employee’s needs. (In addition, we expect human resources to be everyone’s Lullabuddy, from when they first start until they leave Lullabot.)
In an increasingly digital world, fostering the human connection has become extremely important to Lullabot. Our “Be Human” core value emphasizes our belief that it’s these human connections that “bring our work to life.” Could we start working at Lullabot without Lullabuddies? Yes. Would we feel as supported and integrated into the team? Probably not. Having a Lullabuddy is important not just for new hires, but being a Lullabuddy is important for seasoned team members as well. The Lullabuddy practice is one of the ways Lullabot ensures human connections every day.
A Lullabuddy is for life! That’s not an official stance, but it warms my HR heart to hear people call out to each other, “Hi Lullabuddy!” years after they started at Lullabot.