Drupal News

Appnovation Technologies: Simple Website Approach Using a Headless CMS: Part 1

Planet Drupal -

I strongly believe that the path to innovation requires a mix of experimentation, sweat, and failure. Without experimenting with new solutions, new technologies, new tools, we are limiting our ability to improve, arresting our potential to be better, to be faster, and sadly ensuring that we stay rooted in systems, processes and...

erdfisch: Drupalcon mentored core sprint - part 2 - your experience as a sprinter

Planet Drupal -

12.05.2018, by Michael Lenahan

Hello! You've arrived at part 2 of a series of 3 blog posts about the Mentored Core Sprint, which traditionally takes place every Friday at Drupalcon.

If you haven't already, please go back and read part 1.

You may think sprinting is not for you ...

So, you may be the kind of person who usually stays away from the Sprint Room at Drupal events. We understand. You would like to find something to work on, but when you step in the room, you get the feeling you're interrupting something really important that you don't understand.

It's okay. We've all been there.

That's why the Drupal Community invented the Mentored Core Sprint. If you stay for this sprint day, you will be among friends. You can ask any question you like. The venue is packed with people who want to make it a useful experience for you.

Come as you are

All you need in order to take part in the first-time mentored sprint are two things:

  • Your self, a human who is interested in Drupal
  • Your laptop

To get productive, your laptop needs a local installation of Drupal. Don't have one yet? Well, it's your lucky day, because you can get your Windows or Mac laptop set up at the first-time setup workshop!

Need a local Drupal installation? Come to the first-time setup workshop

After about half an hour, your laptop is now ready, and you can go to the sprint room to work on Drupal Core issues ...
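
If you'd like to arrive with something already running, here is a rough sketch of one way to get a throwaway local Drupal 8 codebase; the directory name is made up, and the setup workshop itself may use a different toolchain entirely:

# Sketch only: build a local Drupal 8 codebase with Composer. Requires PHP,
# Composer and git on your laptop; the "drupal-sprint" directory name is
# arbitrary.
composer create-project drupal-composer/drupal-project:8.x-dev drupal-sprint --stability dev --no-interaction
cd drupal-sprint

# Serve the docroot with PHP's built-in web server, then finish the
# installer in a browser at http://127.0.0.1:8888 (choosing SQLite there
# avoids needing a separate database server).
php -S 127.0.0.1:8888 -t web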

You do not need to be a coder ...

You do not need to be a coder to work on Drupal Core. Let's say you're a project manager. You have skills in clarifying issues, deciding what needs to be done next, managing developers, and herding cats. You're great at taking large problems and breaking them down into smaller problems that designers or developers can solve. This is what you do all day when you're at work.

Well, that's also what happens here at the Major Issue Triage table!

But - you could just as easily join any other table, because your skills will be needed there, as well!

Never Drupal alone

At this sprint, no-one works on their own. You work collaboratively in a small group (maybe 3-4 people). So, if you don't have coding or design skills, you will have someone alongside you who does, just like at work.

Working together, you will learn how the Drupal issue queue works. You will, most likely, not fix any large issues during the sprint.

Learn the process of contributing

Instead, you will learn the process of contributing to Drupal. You will learn how to use the issue queue so you can stay in touch with the friends you made today, and fix the issue together in the weeks after Drupalcon.

It's never too late

Even if you've been in the Drupal community for over a decade, just come along. Jump in. You'll enjoy it.

A very welcoming place to start contributing is to work on Drupal documentation. This is how I made my first contribution, at Drupalcon London in 2011. In Vienna, this table was mentored by Amber Matz from Drupalize.Me.

This is one of the most experienced mentors, Valery Lourie (valthebald). We'll meet him again in part 3, when we come to the Drupalcon Vienna live commit.

Here's Dries. He comes along and walks around; no one takes any notice because they are all too engaged and too busy. And so he gets to talk to people without being interrupted.

This is what Drupal is about. It's not about the code. It's about the people.

Next time. Just come. As a sprinter or a mentor. EVERYONE is welcome, we mean that.

This is a three-part blog post series:
Part one is here
You've just finished reading part two
Part three is coming soon

Credit to Amazee Labs and Roy Segall for use of photos from the Drupalcon Vienna flickr stream, made available under the CC BY-NC-SA 2.0 licence.

Tags: planet, drupal-planet, drupalcon, mentoring, code sprint

March User Group, Advantage Labs @ Flock - Dealing with SPAM D7 & D8

Twin Cities Drupal Group -

Start: 2018-03-21 18:00 - 21:00 America/Chicago
Organizers: jerdavis, stpaultim, Les Lim, maryannking
Event type: User group meeting

YOU are invited to the March Drupal JAM Session and User Group meetup!


6:00pm: JAM Session
Co-working and networking. Show off your latest projects, or bring your questions and we'll do our best to answer them informally. We have space to hang out and work with others.

7:00pm: Group topic: Dealing with SPAM in Drupal 7 and Drupal 8

Maryann King is going to introduce the topic and share some of her own recent research on this topic, but this is intended to be a discussion where everyone can share what they are currently doing and how well it is working for them.

And as always, we'll make room for short lightning talks if there's something you'd like to share! Post in the comments below if you're interested.

Currently on deck for lightning talks:

  • Drupal "Out of the Box" initiative update - Tim Erickson
  • 360 Images on Google Street View - Paul Lampland

Advantage Labs at FLOCK
2611 1st Avenue South
Minneapolis, MN, 55408

Pizza and beverages sponsored by Advantage Labs!

Next month's topic

If you have ideas for future topics you would like to see OR if you would like to present a topic, please sign up on our presenter doc:


Tel Aviv, Israel: Cloudflare's 135th Data Center Now Live!

Cloudflare Blog -

Our newest data center is now live in Tel Aviv, Israel! This expands our global network even further to span 135 cities across 68 countries.

High-Tech in Israel

Although Israel will only be turning 70 this year, it has a history so rich we'll leave it to the textbooks. Despite its small size and young age, Israel is home to one of the world's largest tech scenes and is often referred to as the Start-up Nation.

Haifa's Matam technology park houses a few tech giants' offices, including Intel, Apple, Elbit, Google, IBM, Microsoft, Yahoo, Philips and more. Meanwhile, Tel Aviv serves as a true hipster capital, with a high concentration of great coffee shops to serve its many startup employees and founders.

Some brag-worthy Israeli inventions include flash drives, Waze and cherry tomatoes. This is due in part to Israel's excellent education system. Israel is home to some of the top universities in the world, making it one of the top five nations in scientific publications per capita. Israel also has one of the highest rates of PhD and MD degrees per capita, and is among the countries with the most Nobel laureates per capita as well. Israeli mothers, your nagging has paid off.

CC BY-SA 4.0 image by Rita Kozlov (the author)

Native born Israelis are nicknamed “Sabras”, after a cactus fruit: prickly on the outside, soft and sweet on the inside. Indeed, if you find yourself wandering the markets in Israel, or waiting in line for a falafel, you will hear much shouting and bargaining. However, once an Israeli has let you into their home, you will find yourself met with warm hospitality.

Even more cities

Next week, we'll announce additional deployments to help make the Internet even faster.

The Cloudflare Global Anycast Network

This map reflects the network as of the publish date of this blog post. For the most up to date directory of locations please refer to our Network Map on the Cloudflare site.

Mexico City, Mexico: Cloudflare Data Center #134

Cloudflare Blog -


Mexicans! Long Live Mexico! Long Live Mexico! Long Live Mexico! No, it's not the 16th of September (Mexico's Independence Day). However, at Cloudflare we are proud to introduce our data center #134, located in Mexico City, Mexico. This data center marks our entrance into the Aztec Nation. Prior to this, traffic to Mexico was served from some of our other data centers (primarily McAllen, TX; Dallas, TX; and Los Angeles, CA).


The Mexico City metropolitan area has more than 21 million inhabitants, who from today will enjoy faster access to more than 7 million websites served and accelerated by Cloudflare. This is our 10th data center in the Latin America region.

CC BY-NC 2.0 image by kevin53


For this deployment we would like to thank our partner Kio Networks. We will also soon be interconnecting with the Mexico IXP. If you live in Guadalajara, Monterrey or Queretaro, stay tuned: more great news is coming your way.


The Cloudflare Latin America Team will be present in Mexico City for the E-Retail Day Event on the 15th of March at the Sheraton Maria Isabel Hotel. We invite you to attend and chat with our team.

The Cloudflare Global Anycast Network

This map reflects the network as of the publish date of this blog post. For the most up to date directory of locations please refer to our Network Map on the Cloudflare site.

Backup Strategies for 2018

Lullabot -

A few months ago, CrashPlan announced that they were terminating service for home users, in favor of small business and enterprise plans. I'd been a happy user for many years, but this announcement came along with more than just a significant price increase. CrashPlan removed the option for local computer-to-computer or NAS backups, which is key when doing full restores on a home internet connection. Also, as someone paying month-to-month, they gave me two months to migrate to their new service or cancel my account, losing access to historical cloud backups that might be only a few months old.

I was pretty unhappy with how they handled the transition, so I started investigating alternative software and services.

The Table Stakes

These are the basics I expect from any backup software today. If any of these were missing, I went on to the next candidate on my list. Surprisingly, this led to us updating our security handbook to remove recommendations for both Backblaze and Carbonite as their encryption support is lacking.

Backup encryption

All backups should be stored with zero-knowledge encryption. In other words, a compromise of the backup storage itself should not disclose any of my data. A backup provider should not require storing any encryption keys, even in escrow.

Block-level deduplication at the cloud storage level

I don’t want to ever pay for the storage of the same data twice. Much of my work involves large archives or duplicate code shared across multiple projects. Local storage is much cheaper, so I’m less concerned about the costs there.

Block-level deduplication over the network

Like all Lullabots, I work from home. That means I’m subject to an asymmetrical internet connection, where my upload bandwidth is significantly slower compared to my download bandwidth. For off-site backup to be effective for me, it must detect previously uploaded blocks and skip uploading them again. Otherwise, the weeks it could take for an initial backup could take months and never finish.

Backup archive integrity checks

Since we’re deduplicating our data, we really want to be sure it doesn't have errors in it. Each backup and its data should have checksums that can be verified.

Notification of errors and backup status over email

The only thing worse than no backups is silent failures of a backup system. Hosted services should monitor clients for backups, and email when they don’t back up for a set period of time. Applications should send emails or show local notifications on errors.

External drive support

I have an external USB hard drive I use for archived document storage. I want that to be backed up to the cloud and for backups to be skipped (and not deleted) when it’s disconnected.

The Wish List

Features I would really like to have but could get by without.

  1. Client support for macOS, Linux, and Windows. I’ll deal with OS-specific apps if I have to, but I liked how CrashPlan covered almost my entire backup needs for my Mac laptop, a Windows desktop, and our NAS.
  2. Asymmetric encryption instead of a shared key. This allows backup software to use a public key for most operations, and keep the private key in memory only during restores and other operations.
  3. Support for both local and remote destinations in the same application.
  4. “Bare metal” support for restores. There’s nothing better than getting a replacement computer or hard drive, plugging it in, and coming back to an identical workspace from before a loss or failure.
  5. Monitoring of files for changes, instead of scheduled full-disk re-scans. This helps with performance and ensures backups are fresh.
  6. Append-only backup destinations, or versioning of the backup destination itself. This helps to protect against client bugs modifying or deleting old backups and is one of the features I really liked in CrashPlan.

My Backup Picks

Arq for macOS and Windows Cloud Backup

Arq Backup from Haystack software should meet the needs of most people, as long as you are happy with managing your own storage. This could be as simple as Dropbox or Google Drive, or as complex as S3 or SFTP. I ended up using Backblaze B2 for all of my cloud storage.

Arq is an incredibly light application, using just a fraction of the system resources that CrashPlan used. CrashPlan would often use close to 1GB of memory for its background service, while Arq uses around 60MB. One license covers both macOS and Windows, which is a nice bonus.

See Arq’s documentation to learn how to set it up. For developers, setting up exclude patterns significantly helps with optimizing backup size and time. I work mostly with PHP and JavaScript, so I ignore vendor and node_modules. After all, most of the time I’ll be restoring from a local backup, and I can always rebuild those directories as needed.


Arq on Windows is clearly not as polished as Arq on macOS. The interface has some odd bugs, but backups and restores seem solid. You can restore macOS backups on Windows and vice-versa, though some metadata and permissions will be lost in the process. I'm not sure I'd use Arq if I worked primarily in Windows. However, it's good enough that it wasn't worth the time and money for me to set up something else.

Arq is missing Linux client support, though it can back up to any NAS over a mount or SFTP connection.

Like many applications in this space, theoretically, the client can corrupt or delete your existing backups. If this is a concern, be sure to set up something like Amazon S3’s lifecycle rules to preserve your backup set for some period of time via server-side controls. This will increase storage costs slightly but also protects against bugs like this one that mistakenly deleted backup objects.
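
If you go the S3 route, that safety net can be as simple as the following sketch; it assumes the aws CLI and a hypothetical bucket named my-backups, and Backblaze B2 offers comparable lifecycle settings on its buckets.

# Sketch only: keep deleted/overwritten backup objects for 90 days in a
# hypothetical "my-backups" bucket, so a buggy client can't silently destroy
# history. Versioning must be enabled for non-current versions to exist.
aws s3api put-bucket-versioning \
  --bucket my-backups \
  --versioning-configuration Status=Enabled

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "retain-deleted-backup-objects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 90 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backups \
  --lifecycle-configuration file://lifecycle.json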

There are some complaints about issues restoring backups. However, it seems like there are complaints about every backup tool. None of my Arq-using colleagues have ever had trouble. Since I’m using different tools for local backups, and my test restores have all worked perfectly, I’m not very concerned. This post about how Arq blocks backups during verification is an interesting (if overly ranty) read and may matter if you have a large dataset and a very slow internet connection. For comparison, my backup set is currently around 50 GB and validated in around 30 minutes over my 30/5 cable connection.

Time Machine for macOS Local Backup

Time Machine is really the only option on macOS for bare-metal restores. It supports filesystem encryption out of the box, though backups are file level instead of block level. It's by far the easiest backup system I've ever used. Restores can be done through Internet Recovery or through the first-run setup wizard on a new Mac. It's pretty awesome when you can get a brand-new machine, start a restore, and come back to a complete restore of your old environment, right down to open applications and windows.

Time Machine Network backups (even to a Time Capsule) are notoriously unreliable, so stick with an external hard drive instead. Reading encrypted backups is impossible outside of macOS, so have an alternate backup system in place if you care about cross-OS restores.

File History Windows Local Backup

I set up File History for Windows on both my Boot Camp partition and a Windows desktop. File History can back up to an external drive, a network share, or an iSCSI target (since those just show up as additional disks). Network shares do not support encryption with BitLocker, so I set up iSCSI by following this guide. This works perfectly for a desktop that's always wired in. For Boot Camp on my Mac, I can't save the backup password securely (because BitLocker doesn't work with Boot Camp), so I have to remember to enter it on boot and check backups every so often.

Surprisingly, it only backs up part of your user folder by default, so watch for any Application Data folders you want to add to the backup set.

It looked like File History was going to be removed in the Fall Creators Update, but it came back before the final release. Presumably, Microsoft is working on some sort of cloud-backup OneDrive solution for the future. Hopefully, it keeps an option for local backups too.

Duply + Duplicity for Linux and NAS Cloud Backup

Duply (which uses duplicity behind the scenes) is currently the best and most reliable cloud backup system on Linux. In my case, I have an Ubuntu server I use as a NAS. It contains backups of our computers, as well as shared files like our photo library. Locally, it uses RAID1 to protect against hardware failure, LVM to slice volumes, and btrfs + snapper to guard against accidental deletions and changes. Individual volumes are backed up to Backblaze B2 with Duply as needed.
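
For context, one of those per-volume profiles looks roughly like the sketch below; the profile name, bucket, key IDs and paths are placeholders rather than my real configuration:

# Create a profile skeleton, e.g. /etc/duply/photos/{conf,exclude} when run as root.
duply photos create

# /etc/duply/photos/conf (excerpt -- all values are placeholders)
GPG_KEY='DEADBEEF'        # GPG key used to encrypt and sign the archive
GPG_PW='from-a-password-manager'
SOURCE='/srv/photos'      # the volume being backed up
TARGET='b2://keyID:applicationKey@my-backup-bucket/photos'
MAX_AGE=1Y                # how far back 'duply photos purge' keeps history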

Duplicity has been in active development for over a decade. I like how it uses GPG for encryption. Duplicity is best for archive backups, especially for large static data sets. Pruning old data can be problematic for Duplicity. For example, my photo library (which is also uploaded to Google Photos) mostly adds new data, with deletions and changes being rare. In this case, the incremental model Duplicity uses isn't a problem. However, Duplicity would totally fall over backing up a home directory for a workstation, where the data set could significantly change each day. Arq and other backup applications use a "hash backup" strategy, which is roughly similar to how Git stores data.

I manually added a daily cron job in /etc/cron.daily/duply that backs up each data set:

#!/bin/bash
find /etc/duply -mindepth 1 -maxdepth 1 -exec duply \{} backup \;

Note that if you use snapper, duplicity will try to back up the .snapshots directory too! Be sure to set up proper excludes with duply:

# although called exclude, this file is actually a globbing file list
# duplicity accepts some globbing patterns, even including ones here
# here is an example, this incl. only 'dir/bar' except it's subfolder 'foo'
# - dir/bar/foo
# + dir/bar
# - **
# for more details see duplicity manpage, section File Selection
# http://duplicity.nongnu.org/duplicity.1.html#sect9
- **/.cache
- **/.snapshots

One more note: Duplicity relies on a cache of metadata that is stored in ~/.cache/duplicity. On Ubuntu, if you run sudo duplicity, $HOME will be that of your current user account. If you run it with cron or in a root shell with sudo -i, it will be /root. If a backup is interrupted, and you switch the method you used to elevate to root, backups may start from the beginning again. I suggest always using sudo -H to ensure the cache is the same as what cron jobs use.
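
For example, with a hypothetical profile named photos:

# Run an ad-hoc backup with the same $HOME (and therefore the same
# ~/.cache/duplicity metadata cache) that root's cron jobs use.
sudo -H duply photos backup

# Check which cache directory is being populated.
sudo ls /root/.cache/duplicity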

About Cloud Storage Pricing

None of my finalist backup applications offered any cloud storage of their own. Instead, they support a variety of providers, including AWS, Dropbox, and Google Drive. If your backup set is small enough, you may be able to use storage you already get for free. Pricing changes fairly often, but this chart should serve as a rough benchmark between providers. I've included the discontinued CrashPlan unlimited plan as a point of comparison.


I ended up choosing Backblaze B2 as my primary provider. They offered the best balance of price, durability, and ease of use. I’m currently paying around $4.20 a month for just shy of 850GB of storage. Compared to Amazon Glacier, there’s nothing special to worry about for restores. When I first set up in September, B2 had several days of intermittent outages, with constant 503s. They’ve been fine in the months since, and changing providers down the line is fairly straightforward with Rclone. Several of my colleagues use S3 and Google’s cloud storage and are happy with them.

Hash Backup Apps are the Ones to Watch

There are several new backup applications in the "hash backup" space. Arq is considered a hash-backup tool, while Duplicity is an incremental backup tool. Hash backup tools split data into blocks, hash them, and store each block once (similar to how Git works), while incremental tools use a different model with an initial full backup and then a chain of changes (like CVS or Subversion). Based on how verification and backups appeared to work, I believe CrashPlan also used a hash model.

Hash backups vs. incremental backups:

  • Expiring old data: with hash backups, garbage collection is easy (you just delete unreferenced objects), and deleting a backup from the middle of the timeline is also trivial. With incremental backups, deleting expired data requires creating a new "full" backup chain from scratch.
  • Deduplication: hash backups get it almost for free, since each block is hashed and stored once. It isn't a default part of the incremental architecture (but is possible to include).
  • Data verification: with hash backups, verification against a client can be done with hashes, which cloud providers can send via API responses, saving download bandwidth. With incremental backups, verification requires downloading the backup set and comparing against existing files.
  • Multi-client deduplication: hash backups make it possible to deduplicate data shared among multiple clients; incremental backups require a server in the middle to do the same.

I tried several of these newer backup tools, but they were either missing cloud support or did not seem stable enough yet for my use.


BorgBackup has no built-in cloud support but can store remote data over SSH. It's best if the server end can run Borg too, instead of just being a dumb file store. That makes it expensive to run, and it wouldn't protect against ransomware on the server.

While BorgBackup caches scan data, it walks the filesystem instead of monitoring it.

It's slow-ish for initial backups, as it only processes files one at a time, not in parallel; version 1.2 hopes to improve this. It took around 20 minutes to do a local backup of my code and vagrant workspaces (lots of small files, ~12GB) to a local target. An update backup (with one or two file changes) took ~5 minutes to run. This was on a 2016 MacBook Pro with a fast SSD and an i7 processor. There's no way it would scale to backing up my whole home directory.

I thought about off-site syncing to S3 or similar with Rclone. However, that would mean syncing back the entire archive in order to restore anything. It also doubles your local storage space requirements - for example, on my NAS I want to back up photos only to the cloud, since the photos directory itself is a backup.


Duplicacy is an open-source but not free-software-licensed backup tool. It's obviously more open than Arq, but not comparable to something like Duplicity. I found it confusing that a "repository" in its UI is the source of the backup data, and not the destination, unlike every other tool I tested. It intends for all backup clients to use the same destination, meaning that a large file copied between two computers will only be stored once. That could be a significant cost saving depending on your data set.

However, Duplicacy doesn’t back up macOS metadata correctly, so I can’t use it there. I tried it out on Linux, but I encountered bugs with permissions on restore. With some additional maturity, this could be the Arq-for-Linux equivalent.


Duplicati is a .NET application, but it is supported on Linux and macOS with Mono. The stable version has been unmaintained since 2013, so I wasn't willing to set it up. The 2.0 branch was promoted to "beta" in August 2017, with active development. Version numbers in software can be somewhat arbitrary, and I'm happy to use pre-release versions that have been around for years with good community reports, but such a recent beta gave me pause about using this for my backups. Now that I'm not rushing to upload my initial backups before CrashPlan closed my account, I hope to look at this again.


HashBackup is in beta (but has been in use since 2010), and is closed source. There’s no public bug tracker or mailing list so it’s hard to get a feel for its stability. I’d like to investigate this further for my NAS backups, but I felt more comfortable using Duplicity as a “beta” backup solution since it is true Free Software.


Feature-wise, Restic looks like BorgBackup, but with native cloud storage support. Cool!

Unfortunately, it doesn't compress backup data at all, though deduplication may help enough with large binary files that it doesn't matter much in practice. It would depend on the type of data being backed up. I found several restore bugs in the issue queue, but it's only at version 0.7, so it's not as though the developers claim it's production-ready yet.

  • Restore code produces inconsistent timestamps/permissions
  • Restore panics (on macOS)
  • Unable to backup/restore files/dirs with the same name

I plan on checking Restic out again once it hits 1.0 as a replacement for Duplicity.
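
For reference, pointing restic at B2 is already pleasantly simple; the sketch below uses placeholder credentials, bucket and paths:

# Sketch only: restic prompts for a repository encryption password on init
# (or reads RESTIC_PASSWORD) and reuses it for later commands.
export B2_ACCOUNT_ID="000xxxxxxxxxxxx" B2_ACCOUNT_KEY="K000yyyyyyyyyyyy"
restic -r b2:my-backup-bucket:laptop init
restic -r b2:my-backup-bucket:laptop backup ~/Documents
restic -r b2:my-backup-bucket:laptop snapshots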

Fatal Flaws

I found several contenders for backup that had one or more of my basic requirements missing. Here they are, in case your use case is different.


Backblaze's encryption is not zero-knowledge: you have to give them your passphrase to restore, and when you do, they stage your backup unencrypted on a server, inside a zip file.


Carbonite’s backup encryption is only supported for the Windows client. macOS backups are totally unencrypted!


CloudBerry was initially promising, but it only supports continuous backup in the Windows client. While it does support deduplication, it’s file level instead of block level.


iDrive file versions are very limited, with a maximum of 10 versions per file. In other words, expect that files being actively worked on over a week will lose old backups quickly. What's the point of a backup system if I can't recover a Word document from two weeks ago, simply because I've been editing it?


Rclone is rsync for cloud storage providers. Rclone is awesome - but not a backup tool on its own. When testing Duplicity, I used it to push my local test archives to Backblaze instead of starting backups from the beginning.
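
If you want to do the same, a minimal sketch looks like this; the remote names and paths are placeholders defined by your own rclone config:

# One-time interactive setup of the cloud remotes (Backblaze B2, S3, etc.).
rclone config

# Push an existing local duplicity archive to a B2 bucket instead of
# re-running the initial backup over a slow uplink.
rclone copy /srv/backups/photos b2:my-backup-bucket/photos -v

# Switching providers later is a remote-to-remote sync.
rclone sync b2:my-backup-bucket s3:my-backup-bucket -v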


SpiderOak does not have a way to handle purging of historical revisions in a reliable manner. This HN post indicates poor support and slow speeds, so I skipped past further investigation.


Syncovery is a file sync solution that happens to do backup as well. That means it's mostly focused on individual files, synced directly. Given all the other features it has, it just feels too complex to be sure you have the backup set up right.

Syncovery is also file-based, and not block-based. For example, with Glacier as a target, you “cannot rename items which have been uploaded. When you rename or move files on the local side, they have to be uploaded again.”


I was intrigued by Sync as it’s one of the few Canadian providers in this space. However, they are really a sync tool that is marketed as a backup tool. It’s no better (or worse) than using a service like Dropbox for backups.


Tarsnap is $0.25 per GB per month. Using my Arq backup as a benchmark (since it’s deduplicated), my laptop backup alone would cost $12.50 a month. That cost is way beyond the competition.

Have you used any of the services or software I've talked about? What do you think of them, and do you have any others you find useful?

Five new Cloudflare data centers across the United States

Cloudflare Blog -

When Cloudflare launched, three of the original five cities in our network - Chicago, Ashburn and San Jose - were located in the United States. Since then, we have grown the breadth of the global network considerably to span 66 countries, and even expanded the US footprint to twenty-five locations. Even as a highly international business, the United States continues to be home to a number of our customers and the majority of Cloudflare employees.

Today, we expand our network in the United States even further by adding five new locations: Houston (Texas), Indianapolis (Indiana), Montgomery (Alabama), Pittsburgh (Pennsylvania) and Sacramento (California) as our 129th, 130th, 131st, 132nd and 133rd data centers respectively. They represent states that collectively span nearly 100 million people. In North America alone, the Cloudflare network now spans 37 cities, including thirty in the US.

In each of these new locations, we connect with at least one major local Internet service provider and also openly peer using at least one major Internet exchange. We are participants at CyrusOne IX Houston, Midwest IX Indianapolis, Montgomery Internet Exchange, Pittsburgh IX, and the upcoming Sacramento IX.

These deployments improve performance, security and reliability for our customers, even while expanding the edge (and the compute capability it enables). In the not too distant future, we'd like to deploy at cell towers across major metro markets (and beyond!) to support the next generation of 5G-enabled applications.

With the launch of our next data center, Cloudflare will have deployments located in all of the ten most populous North American metropolitan areas.

The Cloudflare Global Anycast Network

This map reflects the network as of the publish date of this blog post. For the most up to date directory of locations please refer to our Network Map on the Cloudflare site.

Baghdad, Iraq: Cloudflare's 128th Data Center

Cloudflare Blog -

Cloudflare's newest data center is located in Baghdad, Iraq, in the region often known as the cradle of civilization. This expands our growing Middle East presence, while serving as our 45th data center in Asia, and 128th data center globally.

Even while accelerating over 7 million Internet properties, this deployment helps our effort to be closer to every Internet user. Previously, ISPs such as Earthlink were served from our Frankfurt data center. Nearly 40 million people live in Iraq.

Rich Cuisine

One of the world's largest producers of the sweet date palm, Iraq has a cuisine that dates back over 10,000 years and includes favorites such as:

  • Kleicha: Date-filled cookies flavored with cardamom, saffron and rose water
  • Mezza: a selection of appetizers to begin the meal
  • Iraqi Dolma: stuffed vegetables with a tangy sauce
  • Iraqi Biryani: cooked rice with spices, beans, grilled nuts and meat / vegetables
  • Masgouf: whole baked fish marinated in oil, salt, pepper, turmeric and tamarind

New data centers

Baghdad is the first of eight deployments joining the Cloudflare global network just this week. Stay tuned!

The Cloudflare Global Anycast Network

This map reflects the network as of the publish date of this blog post. For the most up to date directory of locations please refer to our Network Map on the Cloudflare site.

IBM Cloud, Now Powered by Cloudflare

Cloudflare Blog -

A Tale of Two New Relationships

Late last spring, we were seeking to expand our connections inside of IBM. IBM had first become a direct Cloudflare customer in 2016, when its X-force Exchange business selected Cloudflare, instead of traditional scrubbing center solutions, for DDoS protection, WAF, and Load Balancing. We had friendly relationships with several people inside of IBM’s Softlayer business. We learned that the IBM “Networking Tribe” was evaluating various solutions to fill product gaps that their cloud customers were experiencing for DDoS, DNS, WAF, and load balancing.

In trying to engage with the people leading the effort, I made a casual phone call late on a Friday afternoon to one of the IBMers based in Raleigh, NC. When he understood that I was from Cloudflare, he replied, “Oh, I know Cloudflare. You guys do DDoS protection, right?” I replied, “Well, yes, we do offer DDoS protection, but we also offer a number of other security and performance services.” He indicated that he would be in the Bay Area two weeks later, and that he would bring his team to our office if we could make the time.

Also late last spring, my wife delivered our baby boy. I was scheduled to be on paternity leave during the IBMers’ trip to the Bay Area. Although I was intently focused on practicing modern burping techniques, this seemed like an exciting enough opportunity that I was able to convince my family’s leadership team (aka my wife) that I should sneak into the office for a couple of hours while my son slept.

The Evaluation Process

Before their visit, the IBM team told us that they had evaluated a number of point solutions to fill the different product gaps, and that they had specific requirements for each type of solution. During our discussion, it became apparent that the Cloudflare network and architecture, which allows multiple security and performance services to be delivered easily and cost-effectively to end users around the world, could be a more effective approach than cobbling together point solutions. They wouldn’t have to integrate with multiple vendors, and their customers could achieve best-of-breed results while paying low, transparent prices.

Then the real work began, when the IBM team began to dig into our API documentation, run tests, and learn a lot more about our company. During the next five months, across multiple meetings in several cities, IBM became convinced that they should select Cloudflare to fill all of its product gaps. Cloudflare was ultimately selected for our trusted presence around the globe, scalable architecture, rich set of well-documented APIs, competitive and predictable pricing model, and commitment to empowering enterprise customers.

What We Are Launching

IBM’s new “Cloud Internet Services” offering, powered by Cloudflare, will enable IBM Cloud customers to easily purchase and configure mission-critical web performance and security solutions. These solutions solve critical security problems faced by enterprises, including DDoS mitigation, bot protection, and data theft protection as well as performance challenges, including ensuring application availability and accelerating internet applications and mobile experiences.

With Cloud Internet Services, IBM Cloud customers can get Cloudflare’s core DDoS, WAF, DNS, CDN, and related solutions through the IBM Cloud Dashboard with a few clicks. Certain Cloudflare services, typically available only in Cloudflare Enterprise Plans, can be purchased via the IBM sales team.


Concurrent with this announcement, IBM becomes an authorized reseller of Cloudflare’s entire suite of services. Cloudflare’s solutions can be deployed in any IBM customer environment, including on-premise, hybrid cloud, or public cloud.

A Legacy of Innovation

Founded in 1911 as the Computing-Tabulating-Recording Company (CTR), IBM was renamed "International Business Machines" in 1924. Over the past century, IBM has set the standard for innovation among corporations in America. One of the world's largest employers, with more than 380,000 employees and operations in 170 countries, IBM employees have been awarded five Nobel Prizes, six Turing Awards, ten National Medals of Technology, and five National Medals of Science. IBM holds the record for most patents generated by a business (as of 2018) for 25 consecutive years. During the modern era, IBM became famous for the IBM mainframe, exemplified by the System/360, the dominant computing platform during the 1960s and 1970s. More recently, IBM’s AI platform “Watson” powers new consumer and enterprise services in the health care, financial services, retail, and education markets.

Joint Customer Benefits

Cloudflare is delighted to be working with IBM to expand our enterprise footprint. IBM has broad, deep experience and decades-long relationships with the world’s leading manufacturing, financial services, public sector, retail, and technology brands.

Enterprise customers will now be able to leverage and expand investments in IBM’s infrastructure offerings to realize faster, more secure web properties. As an example, enterprise customers who utilize IBM’s identity management services can invoke Cloudflare services through SSO, maintaining roles and access rights they are accustomed to.

Growing Out of Diapers

While still in its infancy, our partnership promises to bear fruit in multiple product and solution areas. Our shared vision includes harnessing the power of the Cloudflare data set. Integration with IBM QRadar, a flexible security analytics platform, is planned for release in the coming months. Longer-term, applications for end customers that leverage Watson for Cyber Security can be developed.

All of this is possible as a result of Cloudflare’s cloud-native, API-first suite of network edge solutions. We couldn’t be more pleased about our opportunity to work together with IBM to deploy mission-critical security and performance edge services to enterprises around the world.

Interested in learning more about the partnership? Get in touch!

Pop-up Drupal Meetup TONIGHT at Lake Monster

Twin Cities Drupal Group -

Start: 2018-03-13 06:15 - 08:15 America/Chicago
Organizers: stpaultim, mlncn
Event type: User group meeting

Social setting, but Drupal and Drupal-adjacent demos and questions solicited, with display / screen sharing done using the ancient, bespoke practice of looking at the same laptop screen.

Lake Monster Brewing, featuring the Kabomlette food truck.

550 Vandalia St #160
St Paul, MN 55114

Technically it's in St. Paul, to keep this Twin Cities meetup honest, but practically in Minneapolis to make it easy for the plurality.

And in honor of occasional honorary Twin Cities Drupalista, Mauricio Dinarte (dinarcon), who came to the Midwest for Midcamp and traveled here but is almost immediately leaving our fair city for WordCamp and above freezing temperatures in Florida.

Everyone can now run JavaScript on Cloudflare with Workers

Cloudflare Blog -

Exactly one year ago today, Cloudflare gave me a mission: Make it so people can run code on Cloudflare's edge. At the time, we didn't yet know what that would mean. Would it be container-based? A new Turing-incomplete domain-specific language? Lua? "Functions"? There were lots of ideas.

Eventually, we settled on what now seems the obvious choice: JavaScript, using the standard Service Workers API, running in a new environment built on V8. Five months ago, we gave you a preview of what we were building, and started the beta.

Today, with thousands of scripts deployed and many billions of requests served, Cloudflare Workers is now ready for everyone.

"Moving away from VCL and adopting Cloudflare Workers will allow us to do some creative routing that will let us deliver JavaScript to npm's millions of users even faster than we do now. We will be building our next generation of services on Cloudflare's platform and we get to do it in JavaScript!"

— CJ Silverio, CTO, npm, Inc.

What is the Cloud, really?

Historically, web application code has been split between servers and browsers. Between them lies a vast but fundamentally dumb network which merely ferries data from point to point.

We don't believe this lives up to the promise of "The Cloud."

We believe the true dream of cloud computing is that your code lives in the network itself. Your code doesn't run in "us-west-4" or "South Central Asia (Mumbai)", it runs everywhere.

More concretely, it should run where it is most needed. When responding to a user in New Zealand, your code should run in New Zealand. When crunching data in your database, your code should run on the machines that store the data. When interacting with a third-party API, your code should run wherever that API is hosted. When human explorers reach Mars, they aren't going to be happy waiting half an hour for your app to respond -- your code needs to be running on Mars.

Cloudflare Workers are our first step towards this vision. When you deploy a Worker, it is deployed to Cloudflare's entire edge network of over a hundred locations worldwide in under 30 seconds. Each request for your domain will be handled by your Worker at a Cloudflare location close to the end user, with no need for you to think about individual locations. The more locations we bring online, the more your code just "runs everywhere."

Well, OK… it won't run on Mars. Yet. You out there, Elon?

What's a Worker?

Cloudflare Workers derive their name from Web Workers, and more specifically Service Workers, the W3C standard API for scripts that run in the background in a web browser and intercept HTTP requests. Cloudflare Workers are written against the same standard API, but run on Cloudflare's servers, not in a browser.

Here are the tools you get to work with:

  • Execute any JavaScript code, using the latest standard language features.
  • Intercept and modify HTTP request and response URLs, status, headers, and body content.
  • Respond to requests directly from your Worker, or forward them elsewhere.
  • Send HTTP requests to third-party servers.
  • Send multiple requests, in serial or parallel, and use the responses to compose a final response to the original request.
  • Send asynchronous requests after the response has already been returned to the client (for example, for logging or analytics).
  • Control other Cloudflare features, such as caching behavior.

The possible uses for Workers are infinite, and we're excited to see what our customers come up with. Here are some ideas we've seen in the beta:

  • Route different types of requests to different origin servers.
  • Expand HTML templates on the edge, to reduce bandwidth costs at your origin.
  • Apply access control to cached content.
  • Redirect a fraction of users to a staging server.
  • Perform A/B testing between two entirely different back-ends.
  • Build "serverless" applications that rely entirely on web APIs.
  • Create custom security filters to block unwanted traffic unique to your app.
  • Rewrite requests to improve cache hit rate.
  • Implement custom load balancing and failover logic.
  • Apply quick fixes to your application without having to update your production servers.
  • Collect analytics without running code in the user's browser.
  • Much more.

Here's an example.

// A Worker which:
// 1. Redirects visitors to the home page ("/") to a
//    country-specific page (e.g. "/US/").
// 2. Blocks hotlinks.
// 3. Serves images directly from Google Cloud Storage.

addEventListener('fetch', event => {
  event.respondWith(handle(event.request))
})

async function handle(request) {
  let url = new URL(request.url)

  if (url.pathname == "/") {
    // This is a request for the home page ("/").
    // Redirect to country-specific path.
    // E.g. users in the US will be sent to "/US/".
    let country = request.headers.get("CF-IpCountry")
    url.pathname = "/" + country + "/"
    return Response.redirect(url, 302)

  } else if (url.pathname.startsWith("/images/")) {
    // This is a request for an image (under "/images").
    // First, block third-party referrers to discourage
    // hotlinking.
    let referer = request.headers.get("Referer")
    if (referer && new URL(referer).hostname != url.hostname) {
      return new Response("Hotlinking not allowed.", { status: 403 })
    }

    // Hotlink check passed. Serve the image directly
    // from Google Cloud Storage, to save serving
    // costs. The image will be cached at Cloudflare's
    // edge according to its Cache-Control header.
    url.hostname = "example-bucket.storage.googleapis.com"
    return fetch(url, request)

  } else {
    // Regular request. Forward to origin server.
    return fetch(request)
  }
}

It's Really Fast

Sometimes people ask us if JavaScript is "slow". Nothing could be further from the truth.

Workers uses the V8 JavaScript engine built by Google for Chrome. V8 is not only one of the fastest implementations of JavaScript, but one of the fastest implementations of any dynamically-typed language, period. Due to the immense amount of work that has gone into optimizing V8, it outperforms just about any popular server programming language with the possible exceptions of C/C++, Rust, and Go. (Incidentally, we will support those soon, via WebAssembly.)

The bottom line: A typical Worker script executes in less than one millisecond. Most users are unable to measure any latency difference when they enable Workers -- except, of course, when their worker actually improves latency by responding directly from the edge.

On another speed-related note, Workers deploy fast, too. Workers deploy globally in under 30 seconds from the time you save and enable the script.


Workers are a paid add-on to Cloudflare. We wanted to keep the pricing as simple as possible, so here's the deal:

Get Started

"Cloudflare Workers saves us a great deal of time. Managing bot traffic without Workers would consume valuable development and server resources that are better spent elsewhere."

— John Thompson, Senior System Administrator, MaxMind

Appnovation Technologies: Drupal 8 Top Ten: Where Features Meet Functionality

Planet Drupal -

Here at Appnovation, we live, love and breathe all things Drupal, so it should come as no surprise that Drupal 8 is a big deal to us. With a cast list of over 3,000 contributors, Drupal 8 is a testament to the open source community, in which we have an active role, our dedication...

MidCamp - Midwest Drupal Camp: MidCamp 2018 is a wrap

Planet Drupal -

MidCamp 2018 is a wrap

MidCamp 2018 is in the books, and we couldn't have done it without all of you. Thanks to our trainers, trainees, volunteers, organizers, sprinters, venue hosts, sponsors, speakers, and of course, attendees for making this year's camp a success.

Videos are up

By the time you read this, we'll have 100% of the session recordings from camp up on our YouTube Channel. Find all the sessions you missed, share your own session around, and spread the word. While you're there, check out our list of other camps that also have huge video libraries to learn from.

Tell us what you thought

If you didn't fill it out during camp, please fill out our quick survey. We really value your feedback on any part of your camp experience, and our organizer team works hard to take as much of it as possible into account for next year.

Nextide Blog: Drupal Ember Basic App Refinements

Planet Drupal -

This is part 3 of our series on developing a Decoupled Drupal Client Application with Ember. If you haven't yet read the previous articles, it would be best to review Part 1 first. In this article, we are going to clean up the code to remove the hard-coded URL for the host, move the login form to a separate page, and add a basic header and styling.

We currently have defined the host URL in both the adapter (app/adapters/application.js) for the Ember Data REST calls as well as the AJAX Service that we use for the authentication (app/services/ajax.js). This is clearly not a good idea but helped us focus on the initial goal and our simple working app.

Nextide Blog: Untapped areas for Business Improvements

Planet Drupal -

Many organizations still struggle with the strain of manual processes that touch critical areas of the business. And these manual processes could be costlier than you think. It's not just profit that may be slipping away, but employee morale, innovation, competitiveness and so much more.

By automating routine tasks you can increase workflow efficiency, which in turn can free up staff for higher-value work, driving down costs and boosting revenue. And it may be possible to achieve productivity gains more simply, faster, and with less risk than you might assume.

Most companies with manual work processes have been refining them for years, yet they may still not be efficient because they are not automated. So the question to ask is, "Can I automate my current processes?"

Nextide Blog: Maestro D8 Concepts Part 3: Logical Loopbacks & Regeneration

Planet Drupal -

This is part 3 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine. Please see Part 1 for information on Maestro's Templates and Tasks, and Part 2 for the Maestro workflow engine's internals. This post will help workflow administrators understand why Maestro for Drupal 8's validation engine warns about the potential for loopback conditions known as "Regeneration".

Nextide Blog: Maestro D8 Concepts Part 2: The Workflow Engine's Internals

Planet Drupal -

The Maestro Engine is the mechanism responsible for executing a workflow template by assigning tasks to actors, executing tasks for the engine, and providing all of the other logic and glue functionality to run a workflow. The maestro module is the core module in the Maestro ecosystem and is the module that houses the template, variable, assignment, queue and process schema. The maestro module also provides the Maestro API, which developers can use to interact with the engine to do things such as setting/getting process variables, starting processes, and moving the queue along, among many other things.

As noted in the preamble for our Maestro D8 Concepts Part 1: Templates and Tasks post, there is jargon used within Maestro to define certain aspects of the engine and data.  The major terms are as follows:

