Feed aggregator

InternetDevels: Passwords and Drupal: some useful hints and cool modules

Planet Drupal -

As you may remember from fairy tales, knowing the secret words can move mountains and open treasure caves. The words "Open, Sesame" from "Ali Baba and the Forty Thieves" work somewhat similarly to modern website passwords. However, making passwords work perfectly is a complex art, and it is one of the touchstones of Drupal website security.

Read more

ADCI Solutions: Manage your Drupal 8 site configurations

Planet Drupal -

Deployment and configuration management in Drupal 7: what do these words make you feel? Probably a lot of pain. Drupal 7 stores all configuration in the database, together with content.

But what does the good guy Drupal 8 do? Right you are: it provides a completely different way of managing configuration. It is based on the idea that almost all configuration can be stored in files rather than in the database, which allows developers to move settings between development and live sites easily.

In this article we’re going to develop a basic workflow which will help to keep your configurations synchronized between environments.

 

Discover what configuration management in Drupal 8 is.

 

Less Is More - Why The IPv6 Switch Is Missing

Cloudflare Blog -

At Cloudflare we believe in being good to the Internet and good to our customers. By moving on from the legacy world of IPv4-only to the modern-day world where IPv4 and IPv6 are treated equally, we believe we are doing exactly that.

"No matter what happens in life, be good to people. Being good to people is a wonderful legacy to leave behind." - Taylor Swift (whose website has been IPv6 enabled for many many years)

Starting today with free domains, IPv6 is no longer something you can toggle on and off; it's always just on.

How we got here

Cloudflare has always been a gateway for visitors on IPv6 connections to access sites and applications hosted on legacy IPv4-only infrastructure. Connections to Cloudflare are terminated on either IP version and then proxied to the backend over whichever IP version the backend infrastructure can accept.

That means that a v6-only mobile phone (looking at you, T-Mobile users) can establish a clean path to any site or mobile app behind Cloudflare instead of doing an expensive 464XLAT protocol translation as part of the connection (shaving milliseconds and conserving very precious battery life).

That IPv6 gateway is set by a simple toggle that for a while now has been default-on. And to make up for the time lost before the toggle was default-on, in August 2016 we went back and enabled IPv6 for the millions of domains that joined before IPv6 was the default. Over the next few months, we enabled IPv6 for nearly four million domains –– you can see Cloudflare's dent in the IPv6 universe below –– and by the time we were done, 98.1% of all of our domains had IPv6 connectivity.

As an interim step, we added an extra feature –– when you turn off IPv6 in our dashboard, we remind you just how archaic we think that is.

With close to 100% IPv6 enablement, it no longer makes sense to offer an IPv6 toggle. Instead, Cloudflare is offering IPv6 always on, with no off-switch. We’re starting with free domains, and over time we’ll change the toggle on the rest of Cloudflare paid-plan domains.

The Future: How Cloudflare and OpenDNS are working together to make IPv6 even faster and more globally deployed

In November we published stats about the IPv6 usage we see on the Cloudflare network in an attempt to answer who and what is pushing IPv6. The top operating systems by percent IPv6 traffic are iOS, ChromeOS, and MacOS. These operating systems push significantly more IPv6 traffic than their peers because they use an address-selection algorithm called Happy Eyeballs. Happy Eyeballs opportunistically chooses IPv6 when available by doing two DNS lookups –– one for an IPv6 address (stored in the DNS AAAA record - pronounced quad-A) and one for the IPv4 address (stored in the DNS A record). Both DNS queries fly over the Internet at the same time, and the client chooses the address that comes back first. The client even gives IPv6 a few milliseconds head start (iOS and MacOS give IPv6 lookups a 25ms head start, for example) so that IPv6 is chosen more often. This works, and has fueled some of IPv6's growth, but it has fallen short of the goal of a 100% IPv6 world.
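To make the mechanics concrete, here is a minimal Python sketch of the Happy Eyeballs idea: resolve AAAA and A in parallel, give IPv6 a short head start, and use whichever address wins. The 25ms delay mirrors the iOS/MacOS behavior described above; everything else (the port, the hostname, the error handling) is an illustrative assumption, not how any particular OS implements it.

import asyncio
import socket

HEAD_START = 0.025  # iOS/MacOS give the IPv6 lookup a ~25ms advantage

async def resolve(host, family):
    # One DNS lookup: AF_INET6 asks for the AAAA record, AF_INET for the A record.
    loop = asyncio.get_running_loop()
    infos = await loop.getaddrinfo(host, 443, family=family, type=socket.SOCK_STREAM)
    return infos[0][4][0]

async def happy_eyeballs(host):
    v6 = asyncio.create_task(resolve(host, socket.AF_INET6))
    v4 = asyncio.create_task(resolve(host, socket.AF_INET))
    try:
        # Prefer IPv6 if its answer arrives within the head-start window.
        return await asyncio.wait_for(asyncio.shield(v6), HEAD_START)
    except (asyncio.TimeoutError, socket.gaierror):
        # IPv6 missed its head start: take whichever lookup succeeds first.
        for fut in asyncio.as_completed([v6, v4]):
            try:
                return await fut
            except socket.gaierror:
                continue
        raise OSError(f"no address found for {host}")
    finally:
        v6.cancel()
        v4.cancel()

print(asyncio.run(happy_eyeballs("cloudflare.com")))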

While there are perfectly good historical reasons why IPv6 and IPv4 addresses are stored in separate DNS record types, clients today are IP-version agnostic, and it no longer makes sense to require two separate round trips to learn which addresses are available for fetching a resource.

Alongside OpenDNS, we are testing a new idea - what if you could ask for all the addresses in just one DNS query?

With OpenDNS, we are prototyping and testing just that –– a new DNS metatype that returns all available addresses in one DNS answer –– A records and AAAA records in one response. (A metatype is a DNS query type that end users can't add to their zone file; it's assembled dynamically by the authoritative nameserver.)

What this means is that in the future, if a client like an iPhone wants to access a mobile app that uses Cloudflare DNS (or another DNS provider that supports the spec), the iPhone's DNS client would only need to do one DNS lookup to find where the app's API server is located, cutting the number of necessary round trips in half.

This reduces the amount of bandwidth on the DNS system and pre-populates global DNS caches with IPv6 addresses, making IPv6 lookups faster in the future. As a side benefit, Happy Eyeballs clients prefer IPv6 when they can get the address quickly, which increases the amount of IPv6 traffic that flows through the Internet.

We have the metaquery working in code with the reserved TYPE65535 querytype. You can ask a Cloudflare nameserver for TYPE65535 of any domain on Cloudflare and get back all available addresses for that name.

$ dig cloudflare.com @ns1.cloudflare.com -t TYPE65535 +short
198.41.215.162
198.41.214.162
2400:cb00:2048:1::c629:d6a2
2400:cb00:2048:1::c629:d7a2
$

Did we mention Taylor Swift earlier?

$ dig taylorswift.com @ns1.cloudflare.com -t TYPE65535 +short
104.16.193.61
104.16.194.61
104.16.191.61
104.16.192.61
104.16.195.61
2400:cb00:2048:1::6810:c33d
2400:cb00:2048:1::6810:c13d
2400:cb00:2048:1::6810:bf3d
2400:cb00:2048:1::6810:c23d
2400:cb00:2048:1::6810:c03d
$
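For the curious, the same metaquery can be issued from code. Here is a sketch using the dnspython library, asking for the type by its numeric value since the experimental metatype has no symbolic name. Note this was a 2017-era prototype, so the nameservers may no longer answer it:

import dns.message
import dns.query
import dns.resolver

# Find an address for Cloudflare's nameserver, then ask it for the
# experimental TYPE65535 metatype described in the post.
ns_ip = dns.resolver.resolve("ns1.cloudflare.com", "A")[0].to_text()
query = dns.message.make_query("cloudflare.com", 65535)
response = dns.query.udp(query, ns_ip, timeout=2)
for rrset in response.answer:
    print(rrset)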

We believe in proving concepts in code and through the IETF standards process. We're currently working on an experiment with OpenDNS and will translate our learnings into an Internet Draft we will submit to the IETF to become an RFC. We're sure this is just the beginning of faster, better-deployed IPv6.

Patent Troll Battle Update: Doubling Down on Project Jengo

Cloudflare Blog -


Jengo Fett by Brickset (Flickr)

We knew the case against patent trolls was the right one, but we have been overwhelmed by the response to our blog posts on patent trolls and our program for finding prior art on the patents held by Blackbird Tech, which we’ve dubbed Project Jengo. As we discuss in this post, your comments and contributions have allowed us to expand and intensify our efforts to challenge the growing threat that patent trolls pose to innovative tech companies.

We’re SIGNIFICANTLY expanding our program to find prior art on the Blackbird Tech patents

In a little over a week since we started the program, we’ve received 141 separate prior art submissions. But we know there’s an opportunity to find a lot more.

We’ve been impressed with the exceptionally high quality of the submissions. The Cloudflare community of users and readers of our blog are an accomplished bunch, so we have a number of searches that were done by expert engineers and programmers. In one case that stood out to us, someone wrote in about a project they personally had worked on as an engineer back in 1993, which they are convinced is conclusive prior art to a Blackbird Tech patent. We will continue to collect and review these submissions.

The submissions so far relate to 18 of the 38 Blackbird Tech patents and applications. You can see a summary of the number of submissions per patent here (PDF). You'll see there are still 20 Blackbird Tech patents and applications we’ve yet to receive a submission for.

We’re looking for prior art on 100% of the Blackbird Tech patents. If you are interested in helping, take some time to look into those patents where we don’t have anything yet. We’ll update the chart as we review the submissions with additional information about the number we receive, and their quality, to help focus the search. After the initial review, we’ll start to color code the patents (i.e., red/yellow/green) to demonstrate the number and quality of submissions we’ve received on each patent.

An anonymous benefactor donated another $50K to help invalidate all of Blackbird Tech's patents

And our efforts to cover the field have been re-doubled. We’re excited to report that a friend in the industry who read our blog post and shares our concerns about the corrosive impact of patent trolls has made an anonymous donation of $50,000 to support our efforts to invalidate the Blackbird Tech patents. That means that we are now committing at least $100,000 to the effort to find prior art on and initiate actions to invalidate the Blackbird Tech patents.

We initially dedicated a $50,000 bounty to invalidate Blackbird Tech's patents. We split the bounty so $20,000 was to invalidate the particular patent Blackbird Tech sued us on and $30,000 was to help invalidate any other Blackbird Tech patent. We've received so many prior art submissions on the patent in question in Cloudflare's case that we don't believe we need an additional incentive there. Instead, we're dedicating 100% of the anonymously donated $50,000 to invalidating the other Blackbird Tech patents. This will be used both to boost the bounty we pay to researchers as well as to fund invalidation cases we file with the USPTO. Our goal remains invalidating every one of Blackbird Tech's patents. Again if you want more information about how you can participate, you can find the description here.

And, of course, there will be t-shirts!

And it wouldn't be a cooperative effort in the tech community if we didn't give out T-shirts to commemorate your participation in the process. You can see the T-shirt design above. All you have to do is provide a legitimate entry of prior art on any of the Blackbird Tech patents and we'll send one to you (limit one shirt per participant).

Blackbird Tech's "new model" of patent litigation may be a violation of professional ethics; soon it may also be an explicit violation of law

We think the business operations of the Blackbird Tech attorneys may violate the Rules of Professional Conduct in both Illinois and Massachusetts, where Blackbird Tech’s offices are located and where its co-founders work, and we have asked ethics regulators in those states to undertake a review. But we think it’s worth going a step further and working with innovation-supporting legislators in the states where Blackbird Tech operates to make it absolutely clear this new breed of patent troll is not welcome.

As we mentioned in the original blog post, there have already been several proposals at both the state and federal level to push back and limit the ability of patent trolls to use the courts to bring cases against successful companies. Yet Blackbird Tech is pushing in the other direction and attempting to come up with novel ways to increase the efficiency and effectiveness of patent trolls.

On May 23, 2017, Rep. Keith Wheeler of Illinois introduced a bill (the “Ethics in Patent Litigation Act”) that would make it the public policy of the State of Illinois that attorneys in the state, like Blackbird co-founder Chris Freeman (LinkedIn), should not be able to buy patents themselves for the purpose of suing on them if they are not in the business of any other productive activity. We appreciate Rep. Wheeler’s support of innovation and his stance against patent trolls, feel free to show your support via Twitter below.

In Massachusetts, where Blackbird's other attorney co-founder, Wendy Verlander (@bbirdtech_CEO; LinkedIn) is based, Sen. Eric Lesser has specifically targeted patent trolls in a bill he introduced earlier this year.

Well done. It's time to stand up to patent trolls. We have a bill in @MA_Senate that will do just that. @ScottKirsner @jonchesto @epaley https://t.co/O2hHB1R3DT

— Eric Lesser (@EricLesser) May 19, 2017

You can show your support for Sen. Lesser’s stance on these issues using the Twitter generator below. We will be working with Sen. Lesser in the weeks and months ahead to address our concern about Blackbird Tech’s “new model” of patent troll.

Even though the patent system may be based on Federal law, states have the ability to set rules for how businesses, and especially lawyers, behave in their jurisdictions. So we’re happy to work with interested lawmakers in other states, including Delaware, to advance new laws that limit the practices of patent trolls, including Blackbird Tech’s “new model.” We can share the information we’ve learned and pull together model legislation. If you are interested or know a legislator who may be, feel free to email us.

Blackbird Tech calls themselves “very much the same” as and “almost identical” to a law firm when it suits their purposes, and “not a law firm” when it doesn’t

As we wrote before, we believe Blackbird Tech's dangerous new model of patent trolling — where they buy patents and then act as their own attorneys in cases — may be a violation of the rules of professional ethics. In particular, we are concerned that they may be splitting fees with non-attorneys and that they may be acquiring causes of action. Both practices run counter to the rules of professional ethics for lawyers and law firms.

It is increasingly clear to us that Blackbird’s response to questions about their compliance with the rules of professional conduct will be, at best, based on simple agreements that merely create a shortcut around their ethical obligations, and at worst, directly contradictory.

Blackbird Tech wants to have it both ways. In response to the original blog post, Blackbird Tech denied both that it was a law firm and that it used contingency fee agreements. Specifically:

In a phone conversation with Fortune, Blackbird CEO Wendy Verlander said the company is not a law firm and that it doesn't use contingency fee arrangements for the patents it buys, but conceded "it's a similar arrangement."

Ms. Verlander objects to being characterized as a law firm because if Blackbird is found to be one, then their practices would be governed by, and may be a violation of, the rules of professional ethics. Ms. Verlander's denial that Blackbird Tech uses contingency agreements, followed by the quick concession that what they do is "a similar arrangement," suggests again that Blackbird Tech is finding it convenient to work around the ethical rules.

This runs fundamentally counter to the concept of ethical rules, which are meant to be driven by the spirit of those obligations. Anyone out to intentionally "cut corners" or do the "bare minimum" to comply with only the letter of such obligations is by default in violation of the "special responsibilities" which should be driven by "personal conscience," as described in the preamble of the ABA Model Rules.

And Ms. Verlander’s unequivocal assertion that Blackbird Tech is not a law firm can be contrasted with sworn statements submitted by Blackbird Tech attorneys to courts last May asserting how much they operate like a law firm. In Blackbird Tech v. Service Lighting and Electrical Supplies, Blackbird Tech CEO Wendy Verlander, Blackbird Tech co-founder Chris Freeman, and Blackbird Tech employee Sean Thompson, each filed declarations in opposition to a proposed protective order.

Protective orders are important in patent litigation. Often, discovery in those cases involves companies handing over highly confidential information about their most important trade secrets or the history of how they developed valuable intellectual property. In most cases, courts limit access to such materials only to outside counsel, as opposed to the parties’ employees and in-house counsel. In-house counsel generally serve a number of functions at a business that include competitive decision-making, either directly or indirectly. Because in-house counsel may benefit from the additional perspective and insight gained by exposure to sensitive trade secrets of a competitor, and are unable to simply wipe their memories clean, courts in patent litigation cases often limit their review of particularly sensitive documents. In such cases, documents classified as “HIGHLY CONFIDENTIAL—ATTORNEY EYES ONLY” are limited to review by outside counsel, who are less likely to face the same sort of business decisions in the future.

When it served their purposes in opposition to a proposed protective order, the Blackbird Tech attorneys were quick to point out how much they operated just like a law firm and to distance themselves from their business roles. Their sworn declarations specifically asserted:

  • “Although the structure of Blackbird is unique, the realities of patent litigation at Blackbird are very much the same as patent litigation on behalf of clients at law firms.” (Verlander at ¶13, Freeman at ¶14)

  • “Thus, in many ways, my role at Blackbird as a member of the Litigation Group is identical to my previous role as outside counsel at a law firm.” (Verlander at ¶13, Freeman at ¶14)(emphasis added)

  • “Blackbird’s Litigation Group operates almost identically to outside law firm counsel. Blackbird’s litigators are presented with patents and possible infringers, just as clients bring to law firms. The Blackbird litigators then bring their litigation expertise to bear and thoroughly analyze the patent and the potential infringement case, ultimately deciding whether to move forward with litigation — just as a law firm would evaluate a case. If the Blackbird litigation team identifies a strong infringement case, the litigators draft Complaints and conduct litigation, acting in the same role as outside counsel.” (Verlander at ¶14, Freeman at ¶15)(emphasis added).

  • “On a day-to-day basis, what I do at Blackbird is the same as what I did when practicing at a firm.” (Thompson at ¶2).

This inconsistency points out once again how Blackbird is attempting to gain an advantage by turning traditional roles on their head. If they were a typical company that was looking to make products using the patents they own, then we'd be able to seek discovery on their products and operations. Instead, they function as a law firm with no business operations that would be subject to the same sort of scrutiny they will apply to a company like Cloudflare.

And they say that they're not a law firm, yet they expect all their employees, including their CEO, to be permitted to exercise the special role of an attorney "identical to [their] previous role as outside counsel at a law firm." But it would be difficult for them to deny that their employees, including their CEO, are engaged in impermissible attorney practices like buying causes of action and giving a financial interest in litigation to non-parties, which are clearly not "identical" to what they would have done "as outside counsel at a firm." They can't have it both ways.

Coverage of the blog post took our arguments even further

In our previous blog posts on patent trolls, we thought we’d said about everything there was to say, or at least exhausted anyone who might have something else to say. But we found that most of the reports about our efforts did much more than merely parrot our statements and ask Blackbird Tech for a response. These reports raise some excellent additional points that we expect to use in our ongoing efforts to defend the case brought by Blackbird Tech.

Several of the reporters noted that Blackbird Tech’s claims seem a bit farfetched and found their own factual basis for contesting those claims. Joe Mullin (@joemullin) at Ars Technica noted that the Blackbird Tech patent—particularly in the overbroad way it is applying it in the case against Cloudflare—has prior art that dates back to the beginning of the last century:

The suggestion that intercepting and modifying electronic communications is a 1998 “invention” is a preposterous one. By World War I, numerous state governments had systems in place to censor and edit telegraph and telephone conversations.

Similarly, Shaun Nichols (@shaundnichols) of The Register notes that the differences between the Blackbird Tech patent and our operations are "remarkable":

In our view, from a quick read of the documentation, Blackbird's design sounds remarkably different to Cloudflare's approach. Critically, the server-side includes described in the patent have been around well before the patent was filed: Apache, for example, had them as early as 1996, meaning the design may be derailed by prior art.

And beyond the legal arguments in the patent case, Techdirt felt that our arguments questioning the operations of Blackbird Tech itself sounded strikingly similar to another operation that was found to be legally improper:

Righthaven. As you may recall, that was a copyright trolling operation that effectively "bought" the bare right to sue from newspapers. They pretended they bought the copyright (since you can't just buy a right to sue), but the transfer agreement left all the actual power with the newspapers, and courts eventually realized that all Righthaven really obtained was the right to sue. That resulted in the collapse of Righthaven. This isn't exactly analogous, but there are some clear similarities, in having a "company," rather than a law firm (but still run completely by lawyers), "purchase" patents or copyrights solely for the purpose of suing, while setting up arrangements to share the proceeds with the previous holder of those copyrights or patents. It's a pretty sleazy business no matter what — and with Righthaven it proved to be its undoing. Blackbird may face a similar challenge.

It’s probably best to close this post with a statement from Mike Masnick (@mmasnick) of Techdirt that we may save for a closing argument down the road because it summarized the situation better than we had:

Kudos to Cloudflare for hitting back against patent trolling that serves no purpose whatsoever, other than to shake down innovative companies and stifle their services. But, really, the true travesty here is that the company needs to do this at all. Our patent (and copyright) systems seem almost perfectly designed for this kind of shakedown game, having nothing whatsoever to do with the stated purpose of supporting actual innovators and creators. Instead, it's become a paper game abused by lawyers to enrich themselves at the expense of actual innovators and creators.

We will keep you updated. In the meantime, you can contribute to our efforts by continuing to participate in the search for prior art on the Blackbird Tech patents, or you can engage in the political process by supporting efforts to change the patent litigation process. And support folks like Rep. Wheeler or Sen. Lesser with their proposals to limit the power of patent trolls.


Agiledrop.com Blog: AGILEDROP: DrupalCon sessions about Drupal Showcase

Planet Drupal -

Last time, we gathered together DrupalCon Baltimore sessions about Coding and Development. Before that, we explored the area of Project Management and Case Studies. And that was not our last stop. This time, we looked at sessions that were presented in the area of Drupal Showcase. Ain't No Body: Not Your Mama's Headless Drupal by Paul Day from Quotient, Inc. This session explores disembodied Drupal, also known as bodiless Drupal: an application that uses Drupal's powerful framework to do things it does well while storing the actual domain data in a remote repository. Moreover, it explores… READ MORE

Kristian Polso: Blockchain, GDPR, Migrate API... Videos from DrupalCamp Nordics 2017 are live!

Planet Drupal -

I have just finished editing the session videos from the very first DrupalCamp Nordics. DrupalCamp Nordics 2017 was held in Helsinki on the 11th and 12th of May 2017. The event was a great success, with over 120 participants from more than 10 different countries! The topics of the sessions ranged from high-level technology subjects like Blockchain and GDPR to practical developer-oriented matters like using the Migrate API and an introduction to Drupal 8 caching.

myDropWizard.com: Drupal 6 security update for Site Verify

Planet Drupal -

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Moderately Critical security release for the Site Verify module to fix a Cross Site Scripting (XSS) vulnerability.

The Site Verify module enables privileged users to verify a site with services like Google Webmaster Tools using meta tags or file uploads.

The module doesn't sufficiently sanitize input or restrict uploads.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the Site Verify module, we recommend you update immediately.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Reflections on reflection (attacks)

Cloudflare Blog -

Recently Akamai published an article about CLDAP reflection attacks. This got us thinking. We saw attacks from Connectionless LDAP servers back in November 2016 but totally ignored them because our systems were automatically dropping the attack traffic without any impact.

CC BY 2.0 image by RageZ

We decided to take a second look through our logs and share some statistics about reflection attacks we see regularly. In this blog post, I'll describe popular reflection attacks, explain how to defend against them and why Cloudflare and our customers are immune to most of them.

A recipe for reflection

Let's start with a brief reminder on how reflection attacks (often called "amplification attacks") work.

To bake a reflection attack, the villain needs four ingredients:

  • A server capable of performing IP address spoofing.
  • A protocol vulnerable to reflection/amplification. Any badly designed UDP-based request-response protocol will do.
  • A list of "reflectors": servers that support the vulnerable protocol.
  • A victim IP address.

The general idea:

  • The villain sends fake UDP requests.
  • The source IP address in these packets is spoofed: the attacker sticks the victim's IP address in the source IP address field, not their own IP address as they normally would.
  • Each packet is destined to a random reflector server.
  • The spoofed packets traverse the Internet and eventually are delivered to the reflector server.
  • The reflector server receives the fake packet. It looks at it carefully and thinks: "Oh, what a nice request from the victim! I must be polite and respond!". It sends the response in good faith.
  • The response, though, is directed to the victim.

The victim ends up receiving a large volume of response packets it never requested. With a large enough attack, the victim may end up with a congested network and an interrupt storm.

The responses delivered to the victim might be larger than the spoofed requests (hence "amplification"). A carefully mounted attack can therefore multiply the villain's traffic many times over. In the past we've documented a 300Gbps attack generated with an estimated 27Gbps of spoofing capacity.
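To put numbers on that example, a quick back-of-the-envelope calculation:

# Amplification factor for the attack mentioned above:
# 27Gbps of spoofing capacity turned into 300Gbps hitting the victim.
spoofed_gbps = 27
attack_gbps = 300
print(f"amplification: {attack_gbps / spoofed_gbps:.1f}x")  # ~11.1x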

Popular reflections

During the last six months our DDoS mitigation system "Gatebot" detected 6,329 simple reflection attacks (that's one every 40 minutes). Here is the list by popularity of different attack vectors. An attack is defined as a large flood of packets identified by a tuple: (Protocol, Source Port, Target IP). Basically - a flood of packets with the same source port to a single target. This notation is pretty accurate - during normal Cloudflare operation, incoming packets rarely share a source port number!

Count  Proto  Src port
 3774  udp    123      NTP
 1692  udp    1900     SSDP
  438  udp    0        IP fragmentation
  253  udp    53       DNS
   42  udp    27015    SRCDS
   20  udp    19       Chargen
   19  udp    20800    Call Of Duty
   16  udp    161      SNMP
   12  udp    389      CLDAP
   11  udp    111      Sunrpc
   10  udp    137      Netbios
    6  tcp    80       HTTP
    5  udp    27005    SRCDS
    2  udp    520      RIP

Source port 123/udp NTP

By far the most popular reflection attack vector remains NTP. We have blogged about NTP in the past:

Over the last six months we've seen 3,774 unique NTP amplification attacks. Most of them were short. The average attack duration was 11 minutes, with the longest lasting almost 22 hours (1,297 minutes). Here's a histogram showing the distribution of NTP attack durations:

Minutes min:1.00 avg:10.51 max:1297.00 dev:35.02 count:3774
Minutes:
 value |-------------------------------------------------- count
     0 |                                                    2
     1 | *                                                  53
     2 | *************************                          942
     4 |************************************************** 1848
     8 | ***************                                    580
    16 | *****                                              221
    32 | *                                                  72
    64 |                                                    35
   128 |                                                    11
   256 |                                                    7
   512 |                                                    2
  1024 |                                                    1

Most of the attacks used a small number of reflectors - we've recorded an average of 1.5k unique IPs per attack. The largest attack used an estimated 12.3k reflector servers.

Unique IPs min:5.00 avg:1552.84 max:12338.00 dev:1416.03 count:3774
Unique IPs:
 value |-------------------------------------------------- count
    16 |                                                    0
    32 |                                                    1
    64 |                                                    8
   128 | *****                                              111
   256 | *************************                          553
   512 | *************************************************  1084
  1024 |************************************************** 1093
  2048 | *******************************                    685
  4096 | **********                                         220
  8192 |                                                    13

The peak attack bandwidth averaged 5.76Gbps, with a maximum of 64Gbps:

Peak bandwidth in Gbps min:0.06 avg:5.76 max:64.41 dev:6.39 count:3774
Peak bandwidth in Gbps:
 value |-------------------------------------------------- count
     0 | ******                                             187
     1 | *********************                              603
     2 |************************************************** 1388
     4 | *****************************                      818
     8 | ******************                                 526
    16 | *******                                            212
    32 | *                                                  39
    64 |                                                    1

This stacked chart shows the geographical distribution of the largest NTP attack we've seen in the last six months. You can see the packets-per-second count directed at each datacenter. One of our datacenters (San Jose, to be precise) received about a third of the total attack volume, while the remaining packets were distributed roughly evenly across the other datacenters.

The attack lasted 20 minutes, used 527 reflector NTP servers and generated about 20Mpps / 64Gbps at peak.

Dividing these numbers, we can estimate that a single packet in that attack had an average size of 400 bytes (64Gbps ÷ 20Mpps = 3,200 bits ≈ 400 bytes). In fact, in NTP attacks the great majority of packets have a length of precisely 468 bytes (less often 516). Here's a snippet from tcpdump:

$ tcpdump -n -r 3164b6fac836774c.pcap -v -c 5 -K
11:38:06.075262 IP (tos 0x20, ttl 60, id 0, offset 0, proto UDP (17), length 468)
    216.152.174.70.123 > x.x.x.x.47787: [|ntp]
11:38:06.077141 IP (tos 0x0, ttl 56, id 0, offset 0, proto UDP (17), length 468)
    190.151.163.1.123 > x.x.x.x.44540: [|ntp]
11:38:06.082631 IP (tos 0xc0, ttl 60, id 0, offset 0, proto UDP (17), length 468)
    69.57.241.60.123 > x.x.x.x.47787: [|ntp]
11:38:06.095971 IP (tos 0x0, ttl 60, id 0, offset 0, proto UDP (17), length 468)
    126.219.94.77.123 > x.x.x.x.21784: [|ntp]
11:38:06.113935 IP (tos 0x0, ttl 59, id 0, offset 0, proto UDP (17), length 516)
    69.57.241.60.123 > x.x.x.x.9285: [|ntp]

Source port 1900/udp SSDP

The second most popular reflection attack was SSDP, with a count of 1,692 unique events. These attacks were using much larger fleets of reflector servers. On average we've seen around 100k reflectors used in each attack, with the largest attack using 1.23M reflector IPs. Here's the histogram of number of unique IPs used in SSDP attacks:

Unique IPs min:15.00 avg:98272.02 max:1234617.00 dev:162699.90 count:1691
Unique IPs:
   value |-------------------------------------------------- count
     256 |                                                    0
     512 |                                                    4
    1024 | ****************                                   98
    2048 | ************************                           152
    4096 | *****************************                      178
    8192 | *************************                          158
   16384 | ****************************                       176
   32768 | ***************************************            243
   65536 |************************************************** 306
  131072 | ************************************               225
  262144 | ***************                                    95
  524288 | *******                                            47
 1048576 | *                                                  7

The attacks were also longer, with 24 minutes average duration:

$ cat 1900-minutes | ~/bin/mmhistogram -t "Minutes"
Minutes min:2.00 avg:23.69 max:1139.00 dev:57.65 count:1692
Minutes:
 value |-------------------------------------------------- count
     0 |                                                    0
     1 |                                                    10
     2 | *****************                                  188
     4 | ********************************                   354
     8 |************************************************** 544
    16 | *******************************                    342
    32 | ***************                                    168
    64 | ****                                               48
   128 | *                                                  19
   256 | *                                                  16
   512 |                                                    1
  1024 |                                                    2

Interestingly the bandwidth doesn't follow a normal distribution. The average SSDP attack was 12Gbps and the largest just shy of 80Gbps:

$ cat 1900-Gbps | ~/bin/mmhistogram -t "Bandwidth in Gbps"
Bandwidth in Gbps min:0.41 avg:11.95 max:78.03 dev:13.32 count:1692
Bandwidth in Gbps:
 value |-------------------------------------------------- count
     0 | *******************************                    331
     1 | *********************                              232
     2 | **********************                             235
     4 | ***************                                    165
     8 | ******                                             65
    16 |************************************************** 533
    32 | ***********                                        118
    64 | *                                                  13

Let's take a closer look at the largest (80Gbps) attack we've recorded. Here's a stacked chart showing packets per second going to each datacenter. This attack used 940k reflector IPs and generated 30Mpps. The datacenters receiving the largest proportion of the traffic were San Jose, Los Angeles and Moscow.

The average packet size was 300 bytes. Here's how the attack looked on the wire:

$ tcpdump -n -r 4ca985a2211f8c88.pcap -K -c 7
10:24:34.030339 IP 219.121.108.27.1900 > x.x.x.x.25255: UDP, length 301
10:24:34.406943 IP 208.102.119.37.1900 > x.x.x.x.37081: UDP, length 331
10:24:34.454707 IP 82.190.96.126.1900 > x.x.x.x.25255: UDP, length 299
10:24:34.460455 IP 77.49.122.27.1900 > x.x.x.x.25255: UDP, length 289
10:24:34.491559 IP 212.171.247.139.1900 > x.x.x.x.25255: UDP, length 323
10:24:34.494385 IP 111.1.86.109.1900 > x.x.x.x.37081: UDP, length 320
10:24:34.495474 IP 112.2.47.110.1900 > x.x.x.x.37081: UDP, length 288

Source port 0/udp IP fragmentation

Sometimes we see reflection attacks showing UDP source and destination port numbers set to zero. This is usually a side effect of attacks where the reflecting servers responded with large, fragmented packets. Only the first IP fragment contains a UDP header, so subsequent fragments can't be attributed to a UDP port. From a router's point of view this looks like a UDP packet without a UDP header. A confused router reports a packet from source port 0, going to port 0!
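A small scapy sketch illustrates the effect (the addresses, ports and payload size here are made up): fragment a large UDP datagram the way a reflecting server would emit it, and only the offset-zero fragment begins with the UDP header, so anything inspecting a later fragment in isolation sees no port information at all.

from scapy.all import IP, UDP, Raw, fragment

# A 4KB UDP payload, as a reflector might send, fragmented for a 1500-byte MTU.
pkt = IP(dst="192.0.2.1") / UDP(sport=53, dport=4444) / Raw(b"x" * 4096)
for frag in fragment(pkt, fragsize=1480):
    # Only the fragment at offset 0 carries the 8-byte UDP header;
    # the rest is bare payload with no port numbers in it.
    where = "UDP header (ports visible)" if frag.frag == 0 else "payload only (no ports)"
    print(f"offset={frag.frag * 8:5d} bytes, len={len(frag)}: {where}")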

This is a tcpdump-like view:

$ tcpdump -n -r 4651d0ec9e6fdc8e.pcap -c 8
02:05:03.408800 IP 190.88.35.82.0 > x.x.x.x.0: UDP, length 1167
02:05:03.522186 IP 95.111.126.202.0 > x.x.x.x.0: UDP, length 1448
02:05:03.525476 IP 78.90.250.3.0 > x.x.x.x.0: UDP, length 839
02:05:03.550516 IP 203.247.133.133.0 > x.x.x.x.0: UDP, length 1472
02:05:03.571970 IP 54.158.14.127.0 > x.x.x.x.0: UDP, length 1328
02:05:03.734834 IP 1.21.56.71.0 > x.x.x.x.0: UDP, length 1250
02:05:03.745220 IP 195.4.131.174.0 > x.x.x.x.0: UDP, length 1472
02:05:03.766862 IP 157.7.137.101.0 > x.x.x.x.0: UDP, length 1122

An avid reader will notice - the source IPs above are open DNS resolvers! Indeed, from our experience most of the attacks categorized as fragmentation are actually a side effect of DNS amplifications.

Source port 53/udp DNS

Over the last six months we've seen 253 DNS amplifications. On average an attack used 7,100 DNS reflector servers and lasted 24 minutes. The average bandwidth was around 3.4Gbps, with the largest attack reaching 12Gbps.

This is a simplification though. As mentioned above multiple DNS attacks were registered by our systems as two distinct vectors. One was categorized as source port 53, and another as source port 0. This happened when the DNS server flooded us with DNS responses larger than max packet size, usually about 1,460 bytes. It's easy to see if that was the case by inspecting the DNS attack packet lengths. Here's an example:

DNS attack packet lengths min:44.00 avg:1458.94 max:1500.00 dev:208.14 count:40000
DNS attack packet lengths:
 value |-------------------------------------------------- count
     8 |                                                    0
    16 |                                                    0
    32 |                                                    129
    64 |                                                    479
   128 |                                                    84
   256 |                                                    164
   512 |                                                    268
  1024 |************************************************** 38876

The great majority of the received DNS packets were indeed close to the max packet size. This suggests the DNS responses were large and were split into multiple fragmented packets. Let's see the packet size distribution for accompanying source port 0 attack:

$ tcpdump -n -r 4651d0ec9e6fdc8e.pcap \
    | grep length \
    | sed -s 's#.*length \([0-9]\+\).*#\1#g' \
    | ~/bin/mmhistogram -t "Port 0 packet length" -l -b 100
Port 0 packet length min:0.00 avg:1264.81 max:1472.00 dev:228.08 count:40000
Port 0 packet length:
 value |-------------------------------------------------- count
     0 |                                                    348
   100 |                                                    7
   200 |                                                    17
   300 |                                                    11
   400 |                                                    17
   500 |                                                    56
   600 |                                                    3
   700 | **                                                 919
   800 | *                                                  520
   900 | *                                                  400
  1000 | ********                                           3083
  1100 | ************************************               12986
  1200 | *****                                              1791
  1300 | *****                                              2057
  1400 |************************************************** 17785

About half of the fragments were large, close to the max packet length in size, and the rest were just shy of 1,200 bytes. This makes sense: a typical max DNS response is capped at 4,096 bytes. A 4,096-byte response would be seen on the wire as one DNS packet fragment with an IP header, one max-length packet fragment, and one fragment of around 1,100 bytes:

4,096 ≈ 1,460 + 1,460 + 1,060 (plus per-fragment header overhead)

For the record, the particular attack illustrated here used about 17k reflector server IPs, lasted 64 minutes, generated about 6Gbps on the source port 53 strand and 11Gbps of source port 0 fragments.

We have blogged about DNS reflection attacks in the past:

Other protocols

We've seen amplification using other protocols such as:

  • port 19 - Chargen
  • port 27015 - SRCDS
  • port 20800 - Call Of Duty

...and many other obscure protocols. These attacks were usually small and not notable. We didn't see enough of them to provide meaningful statistics, but the attacks were automatically mitigated.

Poor observability

Unfortunately we're not able to report on the contents of the attack traffic. This is notable for the NTP and DNS amplifications - without case-by-case investigations we can't report what responses were actually being delivered to us.

This is because all these attacks were stopped at the network layer. Routers are heavily optimized to perform packet forwarding and have limited capacity for extracting raw packets. Basically, there is no "tcpdump" there.

We track these attacks with netflow, and we observe them hitting our routers' firewalls. The tcpdump snippets shown above were reconstructed artificially from netflow data rather than captured off the wire.

Trivial to mitigate

With a properly configured firewall and sufficient network capacity (which isn't always easy to come by unless you are the size of Cloudflare), it's trivial to block reflection attacks. But note that we've seen reflection attacks of up to 80Gbps, so you do need sufficient capacity.

Properly configuring a firewall is not rocket science: a default DROP policy can get you quite far. In other cases you might want to configure rate-limiting rules. This is a snippet from our JunOS config:

term RATELIMIT-SSDP-UPNP {
    from {
        destination-prefix-list {
            ANYCAST;
        }
        next-header udp;
        source-port 1900;
    }
    then {
        policer SA-POLICER;
        count ACCEPT-SSDP-UPNP;
        next term;
    }
}

But properly configuring a firewall requires some Internet hygiene. You should avoid using the same IP for inbound and outbound traffic. For example, filtering a potential NTP DDoS will be harder if you can't just block inbound port 123 indiscriminately. If your server requires NTP, make sure it exits to the Internet over a non-server IP address!

Capacity game

While having sufficient network capacity is necessary, you don't need to be a Tier 1 network to survive an amplification DDoS. The median attack size we received was just 3.35Gbps and the average was 7Gbps. Only 195 of the 6,329 attacks recorded - 3% - were larger than 30Gbps.

All attacks in Gbps: min:0.04 avg:7.07 med:3.35 max:78.03 dev:9.06 count:6329
All attacks in Gbps:
 value |-------------------------------------------------- count
     0 | ****************                                   658
     1 | *************************                          1012
     2 |************************************************** 1947
     4 | ******************************                     1176
     8 | ****************                                   641
    16 | *******************                                748
    32 | ****                                               157
    64 |                                                    14

But not all Cloudflare datacenters have equal sized network connections to the Internet. So how can we manage?

Cloudflare was architected to withstand large attacks. We are able to spread the traffic on two layers:

  • Our public network uses Anycast. For certain attack types - like amplification - this allows us to split the attack across multiple datacenters avoiding a single choke point.
  • Additionally, we use ECMP internally to spread traffic destined to a single IP address across multiple physical servers (a toy sketch of this follows below).
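Here is that toy sketch of the second layer: hash a flow's 5-tuple so the same flow always lands on the same server, while a flood arriving from many reflector source IPs fans out across the whole pool. The server count and the hash are illustrative assumptions, not Cloudflare's actual implementation.

import hashlib

SERVERS = [f"server-{i}" for i in range(8)]  # illustrative pool size

def ecmp_pick(src_ip, src_port, dst_ip, dst_port, proto="udp"):
    # Hash the 5-tuple: packets of one flow stay on one server, but
    # thousands of distinct reflector IPs spread across the pool.
    key = f"{proto}|{src_ip}|{src_port}|{dst_ip}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

# Two NTP reflectors aimed at the same victim IP typically land on
# different servers:
print(ecmp_pick("198.51.100.7", 123, "203.0.113.1", 4444))
print(ecmp_pick("198.51.100.9", 123, "203.0.113.1", 4444))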

In the examples above, I showed a couple of amplification attacks getting nicely distributed across dozens of datacenters across the globe. In the attacks shown, even if our router firewall had failed, no physical server would have received more than 500kpps of attack traffic. A well-tuned iptables firewall should be able to cope with such a volume without special kernel offload help.

Inter-AS Flowspec for the rest

Withstanding reflection attacks requires sufficient network capacity. Internet citizens without fat network pipes should use a good Internet Service Provider that supports flowspec.

Flowspec can be thought of as a protocol enabling firewall rules to be transmitted over a BGP session. In theory flowspec allows BGP routers on different Autonomous Systems to share firewall rules. The rule can be set up on the attacked router and distributed to the ISP network with the BGP magic. This will stop the packets closer to the source and effectively relieve network congestion.

Unfortunately, due to performance and security concerns, only a handful of large ISPs allow inter-AS flowspec rules. Still, it's worth a try. Check if your ISP is willing to accept flowspec rules from your BGP router!

At Cloudflare we maintain an intra-AS flowspec infrastructure, and we have plenty of war stories about it.

Summary

In this blog post we've given details of three popular reflection attack vectors: NTP, SSDP and DNS. We discussed how the Cloudflare Anycast network helps us avoid a single choke point. In most cases dealing with reflection attacks is not rocket science: sufficient network capacity is needed, but simple firewall rules are usually enough to cope.

The types of DDoS attacks we see from other vectors (such as IoT botnets) are another matter. They tend to be much larger and require specialized, automatic DDoS mitigation. And, of course, there are many DDoS attacks that occur using techniques other than reflection and not just using UDP.

Whether you face DDoS attacks of 10Gbps+, 100Gbps+ or 1Tbps+, Cloudflare can mitigate them.

Lullabot: Invaders! Securing "Smart" Devices on a Home Network

Planet Drupal -

As a part of Lullabot’s security team, we’ve been keeping track of how the Internet of Things plays a role in our company security. Since we’re fully distributed, each employee works day-to-day over their home internet connection. This subreddit reminds us that most “smart” devices are actually quite dumb as far as security goes. With malware like Mirai actively focusing on home IoT devices including cameras, we know that anything we plug in will be under constant assault. However, there can be significant utility in connecting physical devices to your local network. So, my question: is it possible to connect an “IoT” device to my home network securely, even when it has known security issues?

An opportunity presented itself when we needed to buy a new baby monitor that supported multiple cameras. The Motorola MBP853CONNECT was on sale, and included both Wifi and a “regular” proprietary viewer. Let’s see how far we can get.

The Research

Before starting, I wanted to know if anyone else had done any testing with this model of camera. After searching for “motorola hubble security” (Hubble is the name of the mobile app), I came across Push To Hack: Reverse engineering an IP camera. This article goes into great detail about the many flaws they found in a different Motorola camera aimed at outdoor use. Given that both cameras are made by Binatone, and connect to the same remote services, it seemed likely that the MBP853 was subject to similar vulnerabilities. The real question was if Motorola updated all of their cameras to fix the reported bugs, or if they just updated a single line of cameras.

These articles were also great resources for figuring out what the cameras were capable of, and I wouldn’t have gotten as far in the time I had without them:

Goals

I wanted to answer these three questions about the cameras:

  1. Can the cameras be used in a purely “local” mode, without any cloud or internet connectivity at all?
  2. If not, can I allow just enough internet access to the camera so it allows local access, but blocks access to the cloud services?
  3. If I do need to use the Hubble app and cloud service, is it trustworthy enough to be sending images and sounds from my child’s bedroom?
The Infrastructure

I recently redid my home network, upgrading to an APU2 running OPNSense for routing, combined with a Unifi UAP-AC-PRO for wireless access. Both software stacks support VLANs—a way to segregate and control traffic between devices on the same ‘physical’ network. For WiFi, this means creating a separate SSID for the cameras, and assigning it a VLAN ID in the UniFi controller. Then, in OPNSense, I created a new interface with the same VLAN ID. On that interface, I enabled DHCP, and then set up basic firewall rules to block all traffic. That way, I could try setting up the camera while using Wireshark on my laptop to sniff the traffic, without worrying that I was exposing my real network to anything nefarious.

Packet Sniffing

One of the benefits of running a "real" operating system on your router is that all of our favorite network debugging tools are available, including tcpdump. Since Wireshark will be running on our local workstation, and not our router, we need to get the captured traffic off the router. Once I knew the network interface name (via ifconfig), I used SSH along with tcpdump's -w - flag to stream the packet dump to my workstation. If you have enough disk space on the router, you could also dump locally and transfer the file afterwards.

$ ssh root@<router-ip> tcpdump -w - -i igb0_vlan3000 > packet-dump.pcap

After setting this up, I realized that this wouldn't show traffic of the initial setup. That’s because, in setup mode, the WiFi camera broadcasts an open WiFi network. You then have to use the Android or iOS mobile app to configure the camera so it has the credentials to your real network. So, for the first packet dump, I joined my laptop to the setup network along with my phone. Since the network was completely open, I could see all traffic on the network, including the API calls made by the mobile app to the camera.

Verifying the setup vulnerability

Let's make sure this smart camera is using HTTPS and keeps my WiFi password secure.

I wanted to see if the same setup vulnerability documented by Context disclosing my WiFi passwords applied to this camera model. While I doubt anyone in my residential area is capturing traffic, this is a significant concern in high-density locations like apartment buildings. Also, since the cameras use the 2.4GHz and not the 5GHz band, their signal can reach pretty far, especially if all you’re trying to do is read traffic and not have a successful communication. In the OPNSense firewall, I blocked all traffic on the “camera” VLAN. Then, I made sure I had a unique, but temporary password on the WiFi network. That way, if the password was broadcast, at least I wasn’t broadcasting the password for a real network and forcing myself to reset it.

Once I started dumping traffic, I ran through the setup wizard with my phone. The wizard failed at the point where it tests internet connectivity (remember, the firewall was blocking everything), but I could at least capture the initial setup traffic.

In Wireshark, I filtered to HTTPS traffic:


Oh dear. The only traffic captured is from my phone trying to reach 66.111.4.148. According to dig -x 66.111.4.148, that IP resolves to www.fastmail.com - in other words, my email app checking for messages. I was expecting to see HTTPS traffic to the camera, given that the WiFi network was completely open. Let’s look for raw HTTP traffic.


This looks promising. I can see the HTTP commands sent to the camera fetching its version and other information. Wireshark's "Follow HTTP stream" feature is very useful here, helping to reconstruct conversations that are spread over multiple packets and request/response pairs. For example, if I follow the "get version" conversation at number 3399:

GET /?action=command&command=get_version HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

get_version: 01.19.30
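Since the API is plain, unauthenticated HTTP, the same command can be reproduced outside the mobile app. Here is a sketch with Python's requests library, using the setup-mode address and the get_version command visible in the capture above:

import requests

# The camera in setup mode answers on 192.168.193.1; commands are passed
# as plain query-string parameters with no authentication at all.
resp = requests.get(
    "http://192.168.193.1/",
    params={"action": "command", "command": "get_version"},
    timeout=5,
)
print(resp.text)  # e.g. "get_version: 01.19.30"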

Let’s follow the setup_wireless command:

GET /?action=command&command=setup_wireless_save&setup=1002000071600000000606blueboxthisismypasswordcamera000000 HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

setup_wireless_save: 0

That doesn't look good. We can see in the GET:

  1. The SSID of the previous WiFi network my phone was connected to (“bluebox”).
  2. The password for the “camera” network (thisismypassword).
  3. The SSID of that network.

Presumably, this is patched in the latest firmware update. Of course, there’s no way to get the firmware without first configuring the camera. So, I opened up the Camera VLAN to the internet (but not the rest of my local network), and updated.

That process showed another poor design in the Hubble app. When checking for firmware updates, the app fetches the version number from the camera. Then, it compares that to a version fetched from ota.hubble.in… over plain HTTP.


In other words, the firmware update itself is subject to a basic MITM attack, where an attacker could block further updates from being applied. At the least, this process should be over HTTPS, ideally with certificate pinning as well. Amusingly, the OTA server is configured for HTTPS, but the certificate expired the day I was writing this section.
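For contrast, here is a sketch of the kind of check the app could do instead: fetch the update server's TLS certificate and compare its SHA-256 fingerprint against a pinned known-good value before trusting any version metadata. The fingerprint constant is a placeholder, and note that standard validation alone would already have rejected that expired certificate.

import hashlib
import socket
import ssl

PINNED_SHA256 = "replace-with-known-good-fingerprint"  # placeholder value

def cert_fingerprint(host, port=443):
    # Standard validation (hostname, expiry, chain) happens during the
    # handshake too; the expired ota.hubble.in certificate would fail here.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

if cert_fingerprint("ota.hubble.in") != PINNED_SHA256:
    raise SystemExit("certificate mismatch: refusing to check for updates")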


After the update had finished, I reset the camera to factory defaults and checked again. This time, the setup_wireless_save GET was at the least not in cleartext. However, I don’t have any trust that it’s not easily decryptable, so I’m not posting it here.

Evaluating Day-to-Day Security

Assuming that the WiFi password was at least secure from casual attackers, I proceeded to add firewall rules to allow traffic from the camera to the internet, so I could complete the setup process. This was a tedious process. tcpdump along with the OPNSense list of “blocked traffic” was very helpful here. In the end, I had to allow:

  • DNS
  • NTP for time sync
  • HTTPS
  • HTTP
  • UDP traffic

I watched the IPs and hostnames used by the camera, which were all EC2 hosted servers. The “aliases” feature in OPNSense allowed me to configure the rules by hostname, instead of dealing with constantly changing IPs. Of course, given the above security issues, I wonder how secure their DNS registrations are.

Needing to allow HTTP was a red flag to me. So, after the setup finished, I disabled all rules except DNS and NTP. Then, I added a rule to let my normal home LAN access the CAMERA VLAN. I could then access the camera with an RTSP viewer at the URL:

rtsp://user:pass@<camera-ip>:6667/blinkhd/

Yes, the credentials actually are user and pass.
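Viewing that local stream doesn't need the Hubble app at all. Here is a minimal sketch with OpenCV, assuming the camera picked up 192.168.20.10 from the VLAN's DHCP pool (substitute whatever address yours gets):

import cv2

# Open the camera's local RTSP stream with the hardcoded credentials.
cap = cv2.VideoCapture("rtsp://user:pass@192.168.20.10:6667/blinkhd/")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("baby monitor", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()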

And tada! It looked like I had a camera I could use with my phone or laptop, or better yet at the same time as my wife. Neat stuff!

It All Falls Apart

After a fresh boot, everything seemed fine with the video streams. However, over a day or two, the streams would become more and more delayed, or would drop, and, eventually, I’d need to restart the camera. Wondering if this had something to do with my firewall rules, I re-enabled the HTTP, HTTPS, and UDP rules, and started watching the traffic.

Then, my phone started to get notification spammed.

At this point, I’d been using the cameras for about two weeks. As soon as I re-enabled access to Hubble, my phone got notifications about movement detected by the camera. I opened the first one… and there was a picture of my daughter, up in her room, in her jammies.

It was in the middle of the day, and she wasn’t home.

What I discovered is that the camera will save a still every time it detects movement, and buffer them locally until they can be sent. And, looking in Wireshark, I saw that the snapshots were being uploaded with an HTTP POST to snap.json without any encryption at all. Extracting the conversation, and then decoding the POST data (which was form data, not JSON!), I ended up with a picture.

I now had proof the camera was sending video data over the public internet without any security whatsoever. I blocked all internet access, including DNS, hoping that would still let local access work. It did!

Then, my wife and I started hearing random beeps in the middle of the night. Eventually, I tracked it to the cameras. They would beep every 15 minutes or so, as long as they didn’t have a working internet connection. This killed the cameras for home use, as they’d wake the whole family. Worse yet, even if we decided to allow internet access, if it was down in the middle of the night (our cable provider usually does maintenance at 3AM), odds are high we’d all be woken up. I emailed Motorola support, and they said there was no way to disable the beeping, other than to completely reset the cameras and not use the WiFi feature at all.

We’re now happily using the cameras as “dumb” devices.

Security Recommendations and Next Steps

Here are some ideas I had about how Motorola could secure future cameras:

  1. The initial setup problem could have been solved by using WPA2 on the camera. I’ve seen routers from ISPs work this way; the default credentials are unique per device, and printed on the bottom of the device. That would significantly mitigate the risk of a completely open setup process. Other devices include a Bluetooth radio for this purpose.
  2. Use encryption and authentication for all APIs. Of course, there are difficulties from this such as certificate management, hostname validation, and so on. However, this might be a good case where the app could validate based on a set of hardcoded properties, or accept all certificates signed by a custom CA root.
  3. Mobile apps should validate the authenticity of the camera to prevent MITM attacks. This is a solved problem that Binatone simply hasn’t implemented.
  4. Follow HTTP specifications! All “write” commands for the camera API use HTTP GETs instead of POSTs. That means that proxies or other systems may inadvertently log sensitive data. And, since there’s no authentication, it opens up the API to CSRF vulnerabilities.

In terms of recommendations to the Lullabot team, we currently recommend that any “IoT” devices be kept on completely separate networks from devices used for work. That’s usually as simple as creating a “guest” WiFi network. After this exercise, I think we’ll also recommend to treat any such devices as hostile, unless they have been proven otherwise. Remember, the “S” in “IoT” stands for “secure”.

Personally, I want to investigate hacking the camera firmware to remove the beeps entirely. I was able to capture the firmware from my phone (the app stores them in Android’s main storage), and since there’s no authentication, I’m guessing I could replace the beeps with silence, assuming they are WAV or MP3 files.

In the future, I’m hoping to find an IoT vendor with a security record that matches Apple’s, who is clearly the leader in mobile security. Until then, I’ll be sticking with dumb devices in my home.

Invaders! Securing "Smart" Devices on a Home Network

Lullabot -

As a part of Lullabot’s security team, we’ve been keeping track of how the Internet of Things plays a role in our company security. Since we’re fully distributed, each employee works day-to-day over their home internet connection. This subreddit reminds us that most “smart” devices are actually quite dumb as far as security goes. With malware like Mirai actively focusing on home IoT devices including cameras, we know that anything we plug in will be under constant assault. However, there can be significant utility in connecting physical devices to your local network. So, my question: is it possible to connect an “IoT” device to my home network securely, even when it has known security issues?

An opportunity presented itself when we needed to buy a new baby monitor that supported multiple cameras. The Motorola MBP853CONNECT was on sale, and included both WiFi and a “regular” proprietary viewer. Let’s see how far we can get.

The Research

Before starting, I wanted to know if anyone else had done any testing with this model of camera. After searching for “motorola hubble security” (Hubble is the name of the mobile app), I came across Push To Hack: Reverse engineering an IP camera. This article goes into great detail about the many flaws they found in a different Motorola camera aimed at outdoor use. Given that both cameras are made by Binatone, and connect to the same remote services, it seemed likely that the MBP853 was subject to similar vulnerabilities. The real question was whether Motorola had updated all of their cameras to fix the reported bugs, or just a single line of cameras.

These articles were also great resources for figuring out what the cameras were capable of, and I wouldn’t have gotten as far in the time I had without them:

Goals

I wanted to answer these three questions about the cameras:

  1. Can the cameras be used in a purely “local” mode, without any cloud or internet connectivity at all?
  2. If not, can I allow just enough internet access to the camera so it allows local access, but blocks access to the cloud services?
  3. If I do need to use the Hubble app and cloud service, is it trustworthy enough to be sending images and sounds from my child’s bedroom?

The Infrastructure

I recently redid my home network, upgrading to an APU2 running OPNSense for routing, combined with a Unifi UAP-AC-PRO for wireless access. Both software stacks support VLANs—a way to segregate and control traffic between devices on the same ‘physical’ network. For WiFi, this means creating a separate SSID for the cameras, and assigning it a VLAN ID in the UniFi controller. Then, in OPNSense, I created a new interface with the same VLAN ID. On that interface, I enabled DHCP, and then set up basic firewall rules to block all traffic. That way, I could try setting up the camera while using Wireshark on my laptop to sniff the traffic, without worrying that I was exposing my real network to anything nefarious.

Packet Sniffing

One of the benefits of running a “real” operating system on your router is that all of our favorite network debugging tools are available, including tcpdump. Since Wireshark will be running on our local workstation, and not our router, we need to stream the captured traffic to a separate file. Once I knew the network interface name (via ifconfig), I used SSH along with tcpdump’s -w - flag to redirect the packet dump to my workstation. If you have enough disk space on the router, you could also dump locally and then transfer the file afterwards.

$ ssh <user>@<router-ip> tcpdump -w - -i igb0_vlan3000 > packet-dump.pcap
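
The dump-locally variant mentioned above might look like this (the paths and the scp login are illustrative placeholders):

# On the router: capture to a local file instead of streaming over SSH
$ tcpdump -i igb0_vlan3000 -w /tmp/camera.pcap
# Later, from the workstation: pull the capture over for analysis
$ scp <user>@<router-ip>:/tmp/camera.pcap ./packet-dump.pcap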

After setting this up, I realized that this wouldn't show traffic of the initial setup. That’s because, in setup mode, the WiFi camera broadcasts an open WiFi network. You then have to use the Android or iOS mobile app to configure the camera so it has the credentials to your real network. So, for the first packet dump, I joined my laptop to the setup network along with my phone. Since the network was completely open, I could see all traffic on the network, including the API calls made by the mobile app to the camera.
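
Since the setup network is completely open, a plain capture on the laptop’s own wireless interface is enough to record everything (a minimal sketch; the interface name wlan0 is an assumption for a typical Linux laptop):

$ sudo tcpdump -i wlan0 -w setup-network.pcap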

Verifying the Setup Vulnerability

Let’s make sure this smart camera is using HTTPS and keeps my WiFi password secure.

I wanted to see if the same setup vulnerability documented by Context (disclosure of WiFi passwords) applied to this camera model. While I doubt anyone in my residential area is capturing traffic, this is a significant concern in high-density locations like apartment buildings. Also, since the cameras use the 2.4GHz band rather than 5GHz, their signal can reach pretty far, especially if all you’re trying to do is passively read traffic rather than establish a working connection. In the OPNSense firewall, I blocked all traffic on the “camera” VLAN. Then, I made sure I had a unique but temporary password on the WiFi network. That way, if the password was broadcast, at least I wasn’t broadcasting the password for a real network and forcing myself to reset it.

Once I started dumping traffic, I ran through the setup wizard with my phone. The wizard failed (it tests internet connectivity, which I had blocked), but I could at least capture the initial setup traffic.

In Wireshark, I filtered to HTTPS traffic:

[Screenshot: Wireshark capture filtered to HTTPS traffic]

Oh dear. The only traffic captured is from my phone trying to reach 66.111.4.148. According to dig -x 66.111.4.148, that IP resolves to www.fastmail.com - in other words, my email app checking for messages. I was expecting to see HTTPS traffic to the camera, given that the WiFi network was completely open. Let’s look for raw HTTP traffic.

[Screenshot: Wireshark capture filtered to plain HTTP traffic]

This looks promising. I can see the HTTP commands sent to the camera, fetching its version and other information. Wireshark’s “Follow HTTP stream” feature is very useful here, helping to reconstruct conversations that are spread over multiple packets and request/response pairs. For example, if I follow the “get version” conversation at packet number 3399:

GET /?action=command&command=get_version HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

get_version: 01.19.30
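
Since there is no authentication at all, anyone on the network can replay these commands. For example, re-issuing the version query with curl (using the camera address from the capture above):

$ curl "http://192.168.193.1/?action=command&command=get_version"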

Let’s follow the setup_wireless_save command:

GET /?action=command&command=setup_wireless_save&setup=1002000071600000000606blueboxthisismypasswordcamera000000 HTTP/1.1
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1.1; Nexus 6P Build/N4F26O)
Host: 192.168.193.1
Connection: Keep-Alive
Accept-Encoding: gzip

HTTP/1.1 200 OK
Proxy-Connection: Keep-Alive
Connection: Close
Server: nuvoton
Cache-Control: no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0
Pragma: no-cache
Expires: 0
Content-type: text/plain

setup_wireless_save: 0

That doesn't look good. We can see in the GET:

  1. The SSID of the previous WiFi network my phone was connected to (“bluebox”).
  2. The password for the “camera” network (thisismypassword).
  3. The SSID of that network (“camera”).
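
If you prefer pulling that request out of a capture on the command line rather than in the Wireshark UI, tshark’s display filters do the job (a sketch, assuming the capture file from earlier):

$ tshark -r setup-network.pcap -Y 'http.request.uri contains "setup_wireless_save"' -T fields -e http.request.full_uri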

Presumably, this is patched in the latest firmware update. Of course, there’s no way to get the firmware without first configuring the camera. So, I opened up the Camera VLAN to the internet (but not the rest of my local network), and updated.

That process showed another poor design in the Hubble app. When checking for firmware updates, the app fetches the version number from the camera. Then, it compares that to a version fetched from ota.hubble.in… over plain HTTP.

[Screenshot: the Hubble app checking ota.hubble.in for firmware updates over plain HTTP]

In other words, the firmware update itself is subject to a basic MITM attack, where an attacker could block further updates from being applied. At the least, this process should be over HTTPS, ideally with certificate pinning as well. Amusingly, the OTA server is configured for HTTPS, but the certificate expired the day I was writing this section.

[Screenshot: the expired TLS certificate on ota.hubble.in]
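
You can check the certificate’s validity dates yourself from a shell; this openssl one-liner prints the notBefore/notAfter fields:

$ echo | openssl s_client -connect ota.hubble.in:443 2>/dev/null | openssl x509 -noout -dates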

After the update had finished, I reset the camera to factory defaults and checked again. This time, the setup_wireless_save GET was at least not in cleartext. However, I have no confidence that it isn’t easily decryptable, so I’m not posting it here.

Evaluating Day-to-Day Security

Assuming that the WiFi password was at least secure from casual attackers, I proceeded to add firewall rules to allow traffic from the camera to the internet, so I could complete the setup process. This was a tedious process. tcpdump along with the OPNSense list of “blocked traffic” was very helpful here. In the end, I had to allow:

  • DNS
  • NTP for time sync
  • HTTPS
  • HTTP
  • UDP traffic

I watched the IPs and hostnames used by the camera, which were all EC2-hosted servers. The “aliases” feature in OPNSense allowed me to configure the rules by hostname, instead of dealing with constantly changing IPs. Of course, given the above security issues, I wonder how secure their DNS registrations are.

Needing to allow HTTP was a red flag to me. So, after the setup finished, I disabled all rules except DNS and NTP. Then, I added a rule to let my normal home LAN access the camera VLAN. I could then access the camera with an RTSP viewer at the URL:

rtsp://user:pass@<camera-ip>:6667/blinkhd/

Yes, the credentials actually are user and pass.
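
Any RTSP-capable player should work. For example, with ffmpeg’s ffplay (the camera IP is a placeholder for whatever DHCP handed out on the camera VLAN):

$ ffplay "rtsp://user:pass@<camera-ip>:6667/blinkhd/"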

And tada! It looked like I had a camera I could use with my phone or laptop or, better yet, at the same time as my wife. Neat stuff!

It All Falls Apart

After a fresh boot, everything seemed fine with the video streams. However, over a day or two, the streams would become more and more delayed or would drop entirely, and eventually I’d need to restart the camera. Wondering if this had something to do with my firewall rules, I re-enabled the HTTP, HTTPS, and UDP rules, and started watching the traffic.

Then my phone started getting spammed with notifications.

At this point, I’d been using the cameras for about two weeks. As soon as I re-enabled access to Hubble, my phone got notifications about movement detected by the camera. I opened the first one… and there was a picture of my daughter, up in her room, in her jammies.

It was in the middle of the day, and she wasn’t home.

What I discovered is that the camera saves a still every time it detects movement, and buffers the images locally until they can be sent. And, looking in Wireshark, I saw that the snapshots were being uploaded with an HTTP POST to snap.json without any encryption at all. Extracting the conversation, and then decoding the POST data (which was form data, not JSON!), I ended up with a picture.
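
To repeat that extraction, recent tshark builds can dump HTTP bodies straight from a capture (a sketch; --export-objects requires a reasonably new Wireshark, and the form-encoded payload still needs URL-decoding before it is a valid image):

# Write every HTTP object in the capture into ./http-objects
$ tshark -r packet-dump.pcap --export-objects http,http-objects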

I now had proof the camera was sending video data over the public internet without any security whatsoever. I blocked all internet access, including DNS, hoping that would still let local access work. It did!

Then, my wife and I started hearing random beeps in the middle of the night. Eventually, I tracked it to the cameras. They would beep every 15 minutes or so, as long as they didn’t have a working internet connection. This killed the cameras for home use, as they’d wake the whole family. Worse yet, even if we decided to allow internet access, if it was down in the middle of the night (our cable provider usually does maintenance at 3AM), odds are high we’d all be woken up. I emailed Motorola support, and they said there was no way to disable the beeping, other than to completely reset the cameras and not use the WiFi feature at all.

We’re now happily using the cameras as “dumb” devices.

Security Recommendations and Next Steps

Here are some ideas I had about how Motorola could secure future cameras:

  1. The initial setup problem could have been solved by using WPA2 on the camera. I’ve seen routers from ISPs work this way; the default credentials are unique per device, and printed on the bottom of the device. That would significantly mitigate the risk of a completely open setup process. Other devices include a Bluetooth radio for this purpose.
  2. Use encryption and authentication for all APIs. Of course, this brings difficulties such as certificate management, hostname validation, and so on. However, this might be a good case where the app could validate based on a set of hardcoded properties, or accept all certificates signed by a custom CA root (see the sketch after this list).
  3. Mobile apps should validate the authenticity of the camera to prevent MITM attacks. This is a solved problem that Binatone simply hasn’t implemented.
  4. Follow HTTP specifications! All “write” commands for the camera API use HTTP GETs instead of POSTs. That means that proxies or other systems may inadvertently log sensitive data. And, since there’s no authentication, it opens up the API to CSRF vulnerabilities.
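
As a sketch of what item 2 could look like from a client’s point of view, curl supports both a custom CA bundle and outright public-key pinning (the hostname, file name, and pin value here are purely illustrative):

# Trust only a vendor-specific CA for the camera API
$ curl --cacert vendor-root-ca.pem "https://<camera-ip>/?action=command&command=get_version"
# Or pin the camera's public key directly
$ curl --pinnedpubkey 'sha256//AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=' "https://<camera-ip>/?action=command&command=get_version"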

In terms of recommendations to the Lullabot team, we currently recommend that any “IoT” devices be kept on completely separate networks from devices used for work. That’s usually as simple as creating a “guest” WiFi network. After this exercise, I think we’ll also recommend treating any such devices as hostile unless they have been proven otherwise. Remember, the “S” in “IoT” stands for “secure”.

Personally, I want to investigate hacking the camera firmware to remove the beeps entirely. I was able to capture the firmware from my phone (the app stores them in Android’s main storage), and since there’s no authentication, I’m guessing I could replace the beeps with silence, assuming they are WAV or MP3 files.
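
The usual first step for that investigation would be a firmware analysis tool like binwalk (assuming the captured image is an unencrypted blob; the file name is illustrative):

# Scan the firmware image for embedded filesystems and audio files
$ binwalk firmware.bin
# Extract anything binwalk recognizes for closer inspection
$ binwalk -e firmware.bin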

In the future, I’m hoping to find an IoT vendor with a security record that matches Apple’s; Apple is clearly the leader in mobile security. Until then, I’ll be sticking with dumb devices in my home.

Drupal.org blog: What’s new on Drupal.org? - April 2017

Planet Drupal -

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

At the end of April we joined the community at DrupalCon Baltimore. We met with many of you there, gave our update at the public board meeting, and hosted a panel detailing the last 6 months worth of changes on Drupal.org. If you weren't able to join us for this con, we hope to see you in Vienna!

Drupal.org updates

DrupalCon Vienna Full Site Launched!

Speaking of Vienna, in April we launched the full site for DrupalCon Vienna, which will take place September 26-29th, 2017. If you're going to join us in Europe you can book your hotel now, or submit a session. Registration for the event will be opening soon!

DrupalCon Nashville Announced with new DrupalCon Brand

Each year at DrupalCon the location of the next conference is held as a closely guarded secret; the topic of speculation, friendly bets, and web crawlers looking for 403 pages. Per tradition, at the closing session we unveiled the next location for DrupalCon North America - Nashville, TN, taking place from April 9-13th in 2018. But this year there was an extra surprise.

We've unveiled the new brand for DrupalCon, which you will begin to see as the new consistent identity for the event from city to city and year to year. You'll still see the unique character of the city highlighted for each regional event, but with an overarching brand that creates a consistent voice for the event.

Starring Projects

Users on Drupal.org may now star their favorite projects - making it easier to find favorite modules and themes for future projects, and giving maintainers a new dimension of feedback to judge their project's popularity. Users can find a list of the projects they've starred on their user profile. Over time we'll begin to factor the number of stars into a project's ranking in search results.

At the same time that we made this change, we've also added a quick configuration for managing notification settings on a per-project basis. Users can opt to be notified of all issues for a project, only issues they've followed, or no issues. While these notification options have existed for some time, this new UI makes it easier than ever to control issue notifications in your inbox.

Project Browsing Improvements

One of the important functions of Drupal.org is to help Drupal site builders find the distributions, modules, and themes that are the best fit for their needs. In April, we spent some time improving project browsing and discovery.

Search is now weighted by project usage, so the most widely used modules for a given search phrase are more likely to be the top result.

We've also added a filter to the project browsing pages to allow you to filter results by the presence of a supported, stable release. This should make it easier for site builders to sort out mature modules from those still in initial development.

Better visual separation of Documentation Guide description and contents

In response to user feedback, we've updated the visual display of Documentation Guides, to create a clearer distinction between the guide description text and the teaser text for the content within the guides.

Promoting hosting listings on the Download & Extend page

Leveraging Drupal to the fullest requires a good hosting partner, and so we've begun promoting our hosting listings on the Download & Extend page. We want Drupal.org to provide every Drupal evaluator with all of the tools they need to achieve success—from the code itself, to professional services, to hosting, and more.

Composer Sub-tree splits of Drupal are now available

For developers using Composer to manage their projects, sub-tree splits of Drupal Core and Components are now available. This allows PHP developers to use components of Drupal in their projects, without having to depend on Drupal in its entirety.
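
For example, pulling a single component into a non-Drupal project might look like this (the package name follows the drupal/core-* convention on Packagist; check there for exact names and versions):

$ composer require drupal/core-datetime:^8.3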

DrupalCI Automatic Requeuing of Tests in the event of a CI Error

In the past, if the DrupalCI system encountered an error when attempting to run a test, the test would simply return a "CI error" message, and the user who submitted the test had to manually submit a new test. These errors would also cause the issues to be marked as 'Needs work' - potentially resetting the status of an otherwise RTBC (Reviewed & Tested by the Community) issue.

We have updated Drupal.org's integration with DrupalCI so that, in the event of a CI error, Drupal.org automatically queues a retest instead of marking the issue as 'Needs work'.

Bugfix: Only retest one environment when running automatic RTBC retests

Finally, we've fixed a bug in DrupalCI's automatic RTBC retest system. When Drupal HEAD changes, any RTBC patches are automatically retested to ensure that they still apply. It is only necessary to retest against the default or last-used test environment to confirm that the patch still works, but the automatic retests were being run against every configured environment. We've fixed this issue, shortening queue times during a string of automatic retests and saving testing resources for the project.

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who made it possible for us to work on these projects. In particular we want to thank:

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra


myDropWizard.com: Drupal 6 security update for AES

Planet Drupal -

As you may know, Drupal 6 has reached End-of-Life (EOL), which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are, and we're one of them!

Today, there is a Critical security release for the AES encryption module.

The AES module provides an API for encrypting and decrypting data via AES. It also allows storing Drupal passwords encrypted in the database (rather than hashed), which can allow site administrators with high enough permissions to view user passwords.

Previously, the module implemented AES poorly, weakening the encryption and potentially making it easier for an attacker to decrypt data given enough examples of the encrypted output.

(A note about the timing of this release: the AES module was marked unsupported on March 1st, and we started working on a fix right away in the D6LTS queue. We usually release D6LTS patches the same day the D7/D8 patches are posted, or two weeks after a module is unsupported. However, in this case we had only a single Enterprise customer using AES, and so we worked on a timeline dictated by them, which involved testing their custom modules that use the AES API with their team. So, we're releasing this after it's been fully tested and deployed for our one affected customer - if more customers had been affected, it would have been released same-day, as usual.)

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the AES module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Jeff Geerling's Blog: Drupal VM does Docker

Planet Drupal -

Drupal VM has used Vagrant and (usually) VirtualBox to run Drupal infrastructure locally since its inception. But ever since Docker became 'the hot new thing' in infrastructure tooling, I've been asked when Drupal VM will convert to using Docker.

The answer to that question is a bit nuanced; Drupal VM has been using Docker to run its own integration tests for over a year (that's how I run tests on seven different OSes using Travis CI). And technically, Drupal VM's core components have always been able to run inside Docker containers (most of them use Docker-based integration tests as well).

Docker usage, however, was always an undocumented and unsupported feature of Drupal VM. No longer: with 4.5.0, Drupal VM supports Docker as an experimental alternative to Vagrant + VirtualBox, and you can use Drupal VM with Docker in one of two ways:
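
As a rough sketch of the container route (assuming the prebuilt geerlingguy/drupal-vm image on Docker Hub; the port mapping and flags are illustrative, so check the project docs for the supported workflows):

$ docker pull geerlingguy/drupal-vm
$ docker run -d -p 80:80 --name drupal-vm geerlingguy/drupal-vm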

Flocon de toile | Freelance Drupal: Render programmatically a unique field from a node or an entity with Drupal 8

Planet Drupal -

It may sometimes be necessary to render a single field of a content item or entity: for example, for a simplified display of content related to the item being viewed, for reusing specific fields in other contexts, etc. Obtaining the rendering of a field programmatically can be problematic for the Drupal 8 cache invalidation system, since the resulting render array would not contain the cache tags of the source entity. Let's take a look at some solutions available to us.
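
As a minimal sketch of the kind of approach the article explores (the node ID and field name are hypothetical; drush ev runs the snippet inside a full Drupal bootstrap):

$ drush ev '
  $node = \Drupal\node\Entity\Node::load(42);
  // Render one field through the entity view builder rather than by hand.
  $build = \Drupal::entityTypeManager()
    ->getViewBuilder("node")
    ->viewField($node->get("field_example"), "teaser");
  // Note: depending on the approach, the entity cache tags may still need
  // to be merged into $build before it is used in a page render array.
  print \Drupal::service("renderer")->renderPlain($build);
'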

myDropWizard.com: Presentation: Docker & Drupal for local development

Planet Drupal -

Last week, I presented on "Docker & Drupal for local development" at Drupal414, the local Drupal meetup in Milwaukee, WI.

It included:

  • a basic introduction to the why's and how's of Docker,
  • a couple live demos, and
  • the details of how we use Docker as our local development environment to support & maintain hundreds of Drupal sites here at myDropWizard

The presentation wasn't recorded at the time, but it was so well received that I decided to record it again at my desk so I could share it with a wider audience. :-)

Here's the video:

[Video: Docker & Drupal for Local Development]

(Sorry for the poor audio! This was recorded somewhat spontaneously...)

And here are the slides.

Please leave any questions or comments in the comments section below!
