Feed aggregator

Dries Buytaert: How to move an entire government to a new digital platform

Planet Drupal -

In this era of Open Government, constituents expect to be offered great services online. This can require moving an entire government to a new digital platform in order to deliver ambitious digital experiences that support the needs of citizens. It takes work, but many governments from the United States to Australia have demonstrated that with the right technology and strategy in place, governments can successfully adopt a new platform. Unfortunately this is not always the case.

How not to do it: Canada.ca

In 2014, the Government of Canada began a project to move all of its web pages onto a single site, Canada.ca. A $1.54 million contract for a content management system was awarded to a proprietary vendor in 2015. Fast forward to today, and the project is a year behind schedule and 10x over budget. The contract is now approaching $10 million. As only 0.05% of the migration to Canada.ca has been completed, many consider the current project to be a disservice to its citizens.

I find the impending outcomes of this project to be disheartening as current timelines suggest that the migration will continue to be delayed, run over budget, and strain taxpayers. While I hope that Canada.ca will develop into a valuable resource for its citizens, I agree with Tom Cochran, Acquia's Chief Digital Strategist for Public Sector -- who ran digital platforms at the White House and U.S. Department of State -- that the prospects for Canada.ca are dim given the way the project was designed and is being implemented.

The root of Canada.ca's problem appears to be the decision to migrate 1,500 departments and 17 million pages into a single site. I'm guessing that the goal of having a single site is to offer citizens a central entry point to connect with their government. A single site strategy can be extremely effective; for example, the City of Boston's single site is home to over 200,000 web pages spanning 120 city departments, and offers a truly user-centric experience. With 17 million pages to migrate, Canada.ca is eighty-five times bigger than Boston.gov. A project of this magnitude should have considered using a multi-site approach where different agencies and departments have their own sites, but can use a common platform, toolset and shared infrastructure.

While difficulties with Canada.ca may have started with the ambitious attempt to move every department to a single domain, the complexities of this strategy are likely amplified through the implementation of a single-source proprietary solution. I find it unfortunate that Canada's procurement models did not favor Open Source. The Canadian government has a long history of utilizing Open Source, and there is a lot of existing Drupal talent in the country. In rejecting an open platform, the Canadian Government lost the opportunity to engage a skilled community of local Open Source developers.

How to do it: Australian Government

Transforming an entire nation's digital strategy is challenging, but other public sector leaders have proven it is possible. Take the Australian Government. In 2015, John Sheridan, Sharyn Clarkson and their team in the Department of Finance moved their department's site from a legacy environment to Drupal and the cloud. The Department of Finance's success has grown into the Drupal distribution govCMS, which is currently supporting over 52 government agencies across 6 jurisdictions in Australia. Much like Canada.ca, the goal of govCMS is to provide citizens with a more intuitive platform to engage with their government.

The guiding principle of govCMS is to govern without seeking control. Each government department requires flexibility to serve the needs of its particular audience. While single-site solutions do work as umbrellas for some organizations, the City of Boston being a great example, most large (government) organizations that have a state-of-the-art approach follow a hub and spoke model where different sites share code, templates and infrastructure. While sharing is strongly encouraged, it is not required. This allows each department to independently innovate and adopt the platform however it chooses.

The Open Source nature of govCMS has encouraged both innovation and collaboration across many Australian departments. One of the most remarkable examples of this is that a federal agency and a state agency coordinated their development efforts to build a data visualization capability on an open data CKAN repository. The Department of Environment initiated the development of the CKAN module necessary to pull and analyze data from a variety of departments. The Victorian Department of Premier and Cabinet recognized that they too could use the module to power their budget report and helped co-develop the govCMS CKAN module. This is an incredible example of how Open Source allows agencies to extend functionality across departments, regardless of vendor involvement. By setting up a model that removes the barriers to sharing, govCMS has given Australia the freedom to truly innovate.

Seeing is believing: shifting the prevailing mindset

A distributed model using multiple sites to leverage an Open Source platform where infrastructure, code and templates are shared allows for governance and innovation to co-exist. I've written about this model here in a post about Loosening control the Open Source Way. I believe that a multi-site approach based on Open Source is the only way to move an entire government to a new digital strategy and platform.

It can be incredibly hard for organizations to understand this. After all, this is not about product features, technical capabilities or commercial support, but about a completely different way of working and innovating. It's a hard sell because we have to change the lens through which organizations see the world: away from procuring proprietary software that provides perceived safety and control, to a world that allows frictionless innovation and sharing through the loosening of control without losing control. For us to successfully market and sell the innovation that comes out of Drupal, Open Source and cloud, we have to shift how people think and challenge the prevailing model.

In many ways, organizations have to see it to believe it. What is exciting about the Australian government is that it helps others see the potential of a decentralized service model predicated on Open Source software with a Drupal distribution at its heart. The Australian government has created an ecosystem of frictionless sharing that is cheaper, faster, and enables better results.

What is next for Canada?

It’s difficult for me to see a light at the end of the tunnel for Canadian citizens. While the Canadian government can stay the course -- and all indications so far are that they will -- that path has a high price tag, long delays and slow innovation. An alternative would be for Ottawa to hit the pause button and reassess their strategy. They could look externally to how governments in Washington, Canberra, and countless other places approached their mission to support the digital needs of their citizens. I know that there are countless Drupal experts working both within the government and at dozens of Drupal agencies throughout Canada who are eager to show their government a better way forward.

OpenLucius: 17 Tips and tricks for OpenLucius end-users | A Drupal work management platform

Planet Drupal -

The use of basic functions in OpenLucius is pretty clear most of the time. Think of basics like: adding groups, members, tasks, messages, files, folders and book pages.

But there are also many useful functions in the Drupal distro OpenLucius that make working in it more fun and easy, yet are a little less obvious to end-users.

17 tips and tricks to work faster, smarter and more efficiently:

Groups

1. Choose which apps are enabled by group:

2. Determine the order of the apps in a group:

You can determine the order of the enabled apps in a group; the app positioned first will automatically be the default: it opens when you click on the group. By default this is "Overview", but you can, for example, open 'Messages' first by dragging it to the front.

3. Archive a group:

READ MORE >

Agiledrop.com Blog: AGILEDROP: Challenges ahead of Drupal in 2017

Planet Drupal -

What is done is done. What happened, happened. There are only a few days left until the year 2016 finishes. In that time Drupal 8 turned one, some fascinating new sites and products were launched, a lot of new modules were created, many problems were solved ... but still, there is room for improvement. Therefore, we'll look at where Drupal can still improve in the following year. Drupal Community: One of the main problems is that Drupal users are not as active in the community as they should be. So, in 2017 Drupalistas who professionally work with Drupal should be more active in the community.… READ MORE

Platform.sh: Next Wave PHP now supported

Planet Drupal -

We were hoping to have this announcement out in time to be a Christmas gift, but it was not to be. Instead, it’s an early New Year's gift. Nonetheless, we’re happy to announce a whole slew of new options for PHP projects to make them faster and more robust on Platform.sh: PHP 7.1 support, async support, and PThreads support.

Lullabot: Building Views Query Plugins for Drupal 8, Part 2

Planet Drupal -

Welcome to the second installment of our three-part series on writing Views query plugins. In part one, we talked about the kind of thought and design work that must take place before coding begins. In part two, we’ll start coding our plugin and end up with a basic functioning example.

We’ve talked explicitly about needing to build a Views query plugin to accomplish our goal of having a customized Fitbit leaderboard, but we’ll also need field plugins to expose that data to Views, filter plugins to limit result sets, and, potentially, relationship plugins to span multiple API endpoints. There’s a lot to do, so let's dive in.
 

Getting started

In Drupal 8, plugins are the standard replacement for info hooks. If you haven’t yet had cause to learn about the plugin system in Drupal 8, I suggest the Drupalize.Me Drupal 8 Module Development Guide, which includes an excellent primer on Drupal 8 plugins.

Step 1: Create a views.inc file

Although most Views hooks required for Views plugins have gone the way of the dodo, there is still one that survives in Drupal 8: hook_views_data. The Views module looks for that hook in a file named [module].views.inc, which lives in your module's root directory. hook_views_data and hook_views_data_alter are the main things you’ll find here, but since Views is loading this file automatically for you, take advantage and put any Views-related procedural code you may need in this file.

Step 2: Implement hook_views_data()

Usually hook_views_data is used to describe the SQL tables that a module is making available to Views. However, in the case of a query plugin it is used to describe the data provided by the external service.

/**
 * Implements hook_views_data().
 */
function fitbit_views_example_views_data() {
  $data = [];
  // Base data.
  $data['fitbit_profile']['table']['group'] = t('Fitbit profile');
  $data['fitbit_profile']['table']['base'] = [
    'title' => t('Fitbit profile'),
    'help' => t('Fitbit profile data provided by the Fitbit API\'s User Profile endpoint.'),
    'query_id' => 'fitbit',
  ];
  return $data;
}

The format of the array is usually $data[table_name]['table'], but since there is no table I’ve used a short name for the Fitbit API endpoint, prefixed by the module name instead. So far, I’ve found that exposing each remote endpoint as a Views “table”—one-to-one—works well. It may be different for your implementation. This array needs to declare two keys: ‘group’ and ‘base’. When the Views UI refers to your data, it uses the ‘group’ value as a prefix. The ‘base’ key, meanwhile, alerts Views that this table is a base table—a core piece of data available to construct Views from (just like nodes, users and the like). The value of the ‘base’ key is an associative array with a few required keys. The ‘title’ and ‘help’ keys are self-explanatory and are also used in the Views UI. When you create a new view, ‘title’ is what shows up in the “Show” drop-down under “View Settings”:

[Screenshot of the “Show” drop-down in the Views UI]

The ‘query_id’ key is the most important. The value is the name of our query plugin. More on that later.

Step 3: Expose fields

The data you get out of a remote API isn’t going to be much use to people unless they have fields they can display. These fields are also exposed by hook_views_data.

// Fields.
$data['fitbit_profile']['display_name'] = [
  'title' => t('Display name'),
  'help' => t('Fitbit users\' display name.'),
  'field' => [
    'id' => 'standard',
  ],
];
$data['fitbit_profile']['average_daily_steps'] = [
  'title' => t('Average daily steps'),
  'help' => t('The average daily steps over all the user\'s logged Fitbit data.'),
  'field' => [
    'id' => 'numeric',
  ],
];
$data['fitbit_profile']['avatar'] = [
  'title' => t('Avatar'),
  'help' => t('Fitbit users\' account picture.'),
  'field' => [
    'id' => 'fitbit_avatar',
  ],
];
$data['fitbit_profile']['height'] = [
  'title' => t('Height'),
  'help' => t('Fitbit user\'s height.'),
  'field' => [
    'id' => 'numeric',
    'float' => TRUE,
  ],
];

The keys that make up a single field definition include ‘title’ and ‘help’— again self-explanatory—used in the Views UI. The ‘field’ key is used to tell Views how to handle this field. There is only one required sub-key, ‘id,' and it’s the name of a Views field plugin. 

The Views module includes a handful of field plugins, and if your data fits one of them, you can use it without implementing your own. Here we use standard, which works for any plain text data, and numeric, which works for, well, numeric data. There are a handful of others. Take a look inside /core/modules/views/src/Plugin/views/field to see all of the field plugins Views provides out-of-the-box. Find the value for ‘id’ in each field plugin's annotation. As an aside, Views eats its own dog food and implements a lot of its core functionality as Views plugins, providing examples for when you're implementing your own Views plugins. A word of caution: many core Views plugins assume they are operating with an SQL-based query back-end. As such, you’ll want to be careful mixing core Views plugins in with your custom query plugin implementation. We’ll mitigate some of this when we implement our query plugin shortly.

Step 4: Field plugins

Sometimes data from your external resource doesn’t line up with a field plugin that ships with Views core. In these cases, you need to implement a field plugin. For our use case, avatar is such a field. The API returns a URI for the avatar image. We’ll want Views to render that as an <img> tag, but Views core doesn’t offer a field plugin like that. You may have noticed that we set a field ‘id’ of ‘fitbit_avatar’ in hook_views_data above. That’s the name of our custom Views field plugin, which looks like this:

<?php

namespace Drupal\fitbit_views_example\Plugin\views\field;

use Drupal\views\Plugin\views\field\FieldPluginBase;
use Drupal\views\ResultRow;

/**
 * Class Avatar
 *
 * @ViewsField("fitbit_avatar")
 */
class Avatar extends FieldPluginBase {

  /**
   * {@inheritdoc}
   */
  public function render(ResultRow $values) {
    $avatar = $this->getValue($values);
    if ($avatar) {
      return [
        '#theme' => 'image',
        '#uri' => $avatar,
        '#alt' => $this->t('Avatar'),
      ];
    }
  }

}

Naming and file placement are important, as with any Drupal 8 plugin. Save the file at: fitbit_views_example/src/Plugin/views/field/Avatar.php. Notice the namespace follows the file path, and also notice the annotation: @ViewsField("fitbit_avatar"). The annotation declares this class as a Views field plugin with the id ‘fitbit_avatar’, hence the use of that name back in our hook_views_data function. Also important: we're extending FieldPluginBase, which gives us a lot of base functionality for free. Yay OO! As you can see, the render method gets the value of the field from the row and returns a render array so that it appears as an <img> tag.

Step 5: Create a class that extends QueryPluginBase

After all that setup, we’re almost ready to interact with a remote API. We have one more task: to create the class for our query plugin. Again, we’re creating a Drupal 8 plugin, and naming is important so the system knows that our plugin exists. We’ll create a file named: 

fitbit_views_example/src/Plugin/views/query/Fitbit.php 

...that looks like this:

<?php

namespace Drupal\fitbit_views_example\Plugin\views\query;

use Drupal\views\Plugin\views\query\QueryPluginBase;

/**
 * Fitbit views query plugin which wraps calls to the Fitbit API in order to
 * expose the results to views.
 *
 * @ViewsQuery(
 *   id = "fitbit",
 *   title = @Translation("Fitbit"),
 *   help = @Translation("Query against the Fitbit API.")
 * )
 */
class Fitbit extends QueryPluginBase {
}

Here we use the @ViewsQuery annotation to identify our class as a Views query plugin, declaring our ‘id’ and providing some helpful meta information. We extend QueryPluginBase to inherit a lot of free functionality. Inheritance is a recurring theme with Views plugins. I’ve yet to come across a Views plugin type that doesn’t ship with a base class to extend. At this point, we’ve got enough code implemented to see some results in the UI. We can create a new view of type Fitbit profile and add the fields we’ve defined and we’ll get this:

[Screenshot of the new Fitbit profile view in the Views UI]

Not terribly exciting: we still haven’t queried the remote API, so it doesn’t actually do anything yet, but it’s good to stop here to make sure we haven’t made any syntax errors and that Drupal can find and use the plugins we’ve defined.

As I mentioned, parts of Views core assume an SQL query backend. To mitigate that, we need to implement two methods that effectively no-op the SQL-specific behaviour and work around this limitation. Let’s get those out of the way:

public function ensureTable($table, $relationship = NULL) {
  return '';
}

public function addField($table, $field, $alias = '', $params = array()) {
  return $field;
}

ensureTable is used by Views core to make sure that the generated SQL query contains the appropriate JOINs to ensure that a given table is included in the results. In our case, we don’t have any concept of table joins, so we return an empty string, which satisfies plugins that may call this method. addField is used by Views core to limit the fields that are part of the result set. In our case, the Fitbit API has no way to limit the fields that come back in an API response, so we don’t need this. We’ll always provide values from the result set, which we defined in hook_views_data. Views takes care to only show the fields that are selected in the Views UI. To keep Views happy, we return $field, which is simply the name of the field.

Before we come to the heart of our plugin query, the execute method, we’re going to need a couple of remote services to make this work. The base Fitbit module handles authenticating users, storing their access tokens, and providing a client to query the API. In order to work our magic then, we’ll need the fitbit.client and fitbit.access_token_manager services provided by the base module. To get them, follow a familiar Drupal 8 pattern:

/**
 * Fitbit constructor.
 *
 * @param array $configuration
 * @param string $plugin_id
 * @param mixed $plugin_definition
 * @param FitbitClient $fitbit_client
 * @param FitbitAccessTokenManager $fitbit_access_token_manager
 */
public function __construct(array $configuration, $plugin_id, $plugin_definition, FitbitClient $fitbit_client, FitbitAccessTokenManager $fitbit_access_token_manager) {
  parent::__construct($configuration, $plugin_id, $plugin_definition);
  $this->fitbitClient = $fitbit_client;
  $this->fitbitAccessTokenManager = $fitbit_access_token_manager;
}

/**
 * {@inheritdoc}
 */
public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
  return new static(
    $configuration,
    $plugin_id,
    $plugin_definition,
    $container->get('fitbit.client'),
    $container->get('fitbit.access_token_manager')
  );
}

This is a common way of doing dependency injection in Drupal 8. We’re grabbing the services we need from the service container in the create method, and storing them on our query plugin instance in the constructor. 

Now we’re finally ready for the heart of it, the execute method:

/**
 * {@inheritdoc}
 */
public function execute(ViewExecutable $view) {
  if ($access_tokens = $this->fitbitAccessTokenManager->loadMultipleAccessToken()) {
    $index = 0;
    foreach ($access_tokens as $uid => $access_token) {
      if ($data = $this->fitbitClient->getResourceOwner($access_token)) {
        $data = $data->toArray();

        $row['display_name'] = $data['displayName'];
        $row['average_daily_steps'] = $data['averageDailySteps'];
        $row['avatar'] = $data['avatar'];
        $row['height'] = $data['height'];

        // 'index' key is required.
        $row['index'] = $index++;

        $view->result[] = new ResultRow($row);
      }
    }
  }
}

The execute method is open ended. At a minimum, you’ll want to assign ResultRow objects to the $view->result[] member variable. As was mentioned in the first part of the series, the Fitbit API is atypical because we’re hitting the API once per row. For each successful request we build up an associative array, $row, where the keys are the field names we defined in hook_views_data and the values are made up of data from the API response. Here we are using the Fitbit client provided by the Fitbit base module to make a request to the User profile endpoint. This endpoint contains the data we want for a first iteration of our leaderboard, namely: display name, avatar, and average daily steps. Note that it’s important to track an index for each row. Views requires it, and without it, you’ll be scratching your head as to why Views isn’t showing your data. Finally, we create a new ResultRow object with the $row variable we built up and add it to $view->result. There are other things that are important to do in execute like paging, filtering and sorting. For now, this is enough to get us off the ground.

That’s it! We should now have a simple but functioning query plugin that can interact with the Fitbit API. After following the installation instructions for the Fitbit base module, connecting one or more Fitbit accounts and enabling the fitbit_views_example sub-module, you should be able to create a new View of type Fitbit profile, add Display name, Avatar, and Average Daily Steps fields and get a rudimentary leaderboard:

[Screenshot of the rudimentary Fitbit leaderboard]

Debugging problems

If the message ‘broken or missing handler’ appears when attempting to add a field or other type of handler, it usually points to a class naming problem somewhere. Go through your keys and class definitions and make sure that you’ve got everything spelled correctly. Another common issue is Drupal throwing errors because it can’t find your plugins. As with any plugin in Drupal 8, make sure your files are named correctly, put in the right folder, with the right namespace, and with the correct annotation.

Summary

Most of the work here has nothing to do with interacting with remote services at all—it is all about declaring where your data lives and what it's called. Once we get past the numerous steps that are necessary for defining any Views plugins, the meat of creating a new query plugin is pretty simple.

  1. Create a class that extends QueryPluginBase
  2. Implement some empty methods to mitigate assumptions about a SQL query backend
  3. Inject any needed services
  4. Override the execute method to retrieve your data into a ResultRow object with properties named for your fields, and store that object on the results array of the Views object.

In reality, most of your work will be spent investigating the API you are interacting with and figuring out how to model the data to fit into the array of fields that Views expects.

Next steps

In the third part of this article, we’ll look at the following topics:

  1. Exposing configuration options for your query object
  2. Adding options to field plugins
  3. Creating filter plugins

Until next time!

ADCI Solutions: Cache in Drupal 8

Planet Drupal -

Caching is an important part of the development process. Everybody uses a cache, but not everybody is able to manage it.
A cache is a hardware or software component that stores frequently requested pages, or parts of pages, so that they can be served to users with fewer resources and at a higher speed than usual.
What happens to a page while it’s loading? System functions and files of all modules are loaded, settings and variables are initialized, a theme is loaded and hooks are invoked. When the cache is enabled, only the basic system settings load and the page is served from the cache. Obviously, in this case the page loads faster.
The cache is an important component of site optimization. It’s one of the key items in a Google PageSpeed assessment.
Let’s have a look at how you can use cache for your site on Drupal 8. We will also go into details and talk about Cache API, auto-placeholdering and the BigPipe module. Keep on reading here
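To give a concrete flavour of the Cache API discussed in the full article, here is a minimal sketch of attaching cacheability metadata to a render array; the specific cache tag, context and max-age values below are illustrative assumptions, not taken from the article:

// A minimal sketch of Drupal 8 cacheability metadata on a render array.
// The tag, context and max-age values are example choices.
$build = [
  '#markup' => t('Latest articles'),
  '#cache' => [
    // Invalidate this output whenever any node is added, updated or deleted.
    'tags' => ['node_list'],
    // Keep a separate cached copy per query string (e.g. for pagination).
    'contexts' => ['url.query_args'],
    // Expire the cached copy after one hour at most.
    'max-age' => 3600,
  ],
];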

ADCI Solutions: Modern practices for creating the visual part of the web

Planet Drupal -

Summary

A modern website and its design are no longer a simple text node. The modern website is a full-fledged application made up of components, widgets, buttons and other controls. The approach to website development is changing too, from writing markup for content placement to building pages out of ready-to-use components.

There are also changes in the way designers and developers interact now. That has led to the emergence of collaboration tools: Avocode, Zeplin, Figma.

Summing up all of the above, we’re going to look at how component-based thinking changes frameworks, how collaboration tools help designers and developers understand each other, and what tools they can use to speed up their work. React.js, Angular, the Atomic Design approach and much more will be discussed in this article. Read the full article here.

INTRODUCTION

A modern website and its design aren’t a simple text node anymore. The modern website is a full-fledged application with its components, widgets, buttons and other controls. Now a designer not only considers the website as a whole, but also takes each component into account, since each of them is developed separately from the others: a given element has its own style, can be placed anywhere on the site or even switched off completely, so the design inevitably changes.

There are also changes in the way designers and developers interact now. They understand how deeply they are interconnected and try to keep an eye on what’s going on within each other's areas of responsibility. That has led to the emergence of collaboration tools.

Summing up all of the above, we’re going to look at how component-based thinking changes frameworks, how collaboration tools help designers and developers understand each other, and what tools they can use to speed up their work.

COMPONENT BASED THINKING

React.js and Angular are the two main tools on the front-end development market. It’s not obvious which of them will take the market over, though. Besides these two we have Ember.js, Vue.js, Polymer and many more.

What are these frameworks good for?

React.js, for example, lets us write almost all code in JavaScript - templates are written in JSX (a mix of JavaScript and HTML) - so as an output we have interconnected components with a wide variety of functions available. Components’ styles are written in parallel with the components themselves.

But take into account that React only outputs HTML. So what’s the fuss about React.js? First of all, you know for sure how your component will render by looking at its source file. Secondly, that rather weird JSX mix makes your code cleaner. Though you cannot build an application with React.js alone, this library helps us update the view for the user.

With Angular you can create a component structure too. As it’s said in the documentation, “Angular's data binding and dependency injection eliminate much of the code you currently have to write. And it all happens within the browser, making it an ideal partner with any server technology”. Angular is perfect for single-page applications, which are getting more and more popular, so we strongly recommend using it.

Ember.js is another MVC framework. It’s a very structured and beautiful one, but a drawback is that Ember has a rather small community around it. Ember is lighter than other JS libraries, yet it handles the creation of websites with heavy client-side functionality. Again, data binding is present. What sets Ember apart is that the route is used as the model, a Handlebars template as the view, and the controller manipulates the data in the model.

Finally, we have Vue.js and Polymer: these are libraries for creating components.

Let’s proceed to the markup technologies we can use.

MARKUP TECHNOLOGIES

There are a few ways of structuring your CSS styles. We’d love to highlight BEM, SMACSS, CSS Modules and Atomic Design. CSS Modules is pretty similar to BEM, but the implementation technology varies. With this in mind, one would use CSS Modules with the React.js library or the Angular framework.

What is BEM?

BEM is a methodology developed by the Russian IT company Yandex, and its fame is now spreading worldwide. BEM’s approach to markup is component-based: a component is marked up once and its styles are reused for components of the same type. Modifications are available!

BEM includes blocks, elements and modifiers.

  • Blocks can be used in different locations on the website.
  • Elements are the parts of a block and don’t have any functionality outside of it.
  • Modifiers are features of blocks or elements that change their appearance or behaviour.

.block_element {...}
.block_element-modifier {...}

What’s good? Modules are separated from each other and there are no unexpected cascades of selectors.

What’s not that inspiring about BEM? Long class names are not that convenient to use (especially in big projects).

SMACSS stands for Scalable and Modular Architecture for CSS. This approach aims to reduce the amount of code and simplify code maintenance.

SMACSS divides styles into 5 parts.

  1. Base rules - basic styles. These are the styles of the main website elements: body, input, button, ul, ol, etc. In this section we mainly use tag selectors and attribute selectors; classes are used only in a few cases (for instance, for elements styled by JavaScript).
  2. Layout rules - layout styles. Here go the styles of the global elements, such as the header, footer and sidebar sizes. The original suggestion was to use ids in these selectors, since the elements appear on the web page only once; a contradicting recommendation is not to use id selectors in styles but to use classes. It’s up to you.
  3. Module rules - module styles, i.e. the styles of blocks that can be used on the web page several times. It’s not recommended to use id or tag selectors for module classes.
  4. State rules - state styles. This part defines the different states of modules and of the site’s structure. It’s the only subsection where the keyword «!important» is allowed.
  5. Theme rules - design. Design styles are described in this subsection. They can be changed if needed.

Atomic design wraps up this block.

Atomic Design follows component thinking: it breaks the whole website into components so they can be used throughout the site, in different locations. A site that adopts this philosophy is easier to hand over to a new developer; at the very least, the codebase will be clear.

The whole website design can be divided into 5 levels: atoms, molecules, organisms, templates and pages. Atoms are basic building bricks, like buttons. Together they form molecules that do meaningful work on the website: for example, a set of buttons becomes a contact form.

The molecules, in their turn, create particular subsections of the site: header, footer, sidebar, etc. These combinations of molecules are organisms. Several organisms together form a template, and that’s what you can show to a client.

The final stage - pages - is the templates filled with real content.

The Atomic Design process makes a lot of sense because assembling a system this way is less time-consuming than a typical design process: both the client and the designer see the system being created step by step, and there’s no need to deconstruct the whole page if the client doesn’t like the design offered.

When it comes to development, the same approach can be applied as well. It makes code more consistent and clear. This way, you don’t have to write the same elements again and again; you just go through the atom library and copy the code.

Liked Atomic Design? Bear in mind that it’s better to build the website with this approach from the very beginning than to retrofit it afterwards. More than that, Atomic Design suits big projects better than other approaches do.

Great, we’re done with slicing. The next step is moving a template from a design tool to the website.

DESIGNER-DEVELOPER COLLABORATION TOOLS

Ladies and gentlemen, let me present two important tools for this purpose: Zeplin and Avocode. Both are aimed at developers, and if you have either Avocode or Zeplin there’s no need for Photoshop or Sketch. When it comes to extracting styles, Avocode and Zeplin do it better than Adobe Extract (which Adobe Brackets uses for this purpose). And since the Agile approach is catching on fast, we have to think about how to optimize communication within the team.

Zeplin is an app that collects all aspects of design elements into a sheet of specs: it simplifies the handoff between design and development. In other words, it turns a design into code: Zeplin even takes care of generating assets in all the sizes your project needs, along with colours, margins and CSS suggestions for certain elements. Zeplin also exports the assets to LESS, Sass and Stylus, and keeps all the data in cloud storage, which makes it available for contribution by all team members.

And it works not only for Sketch designs; there’s already a plugin for Photoshop too. Get together and make your layout pixel-perfect!

By the way, if you work in Sketch and have to convert your design asset immediately, use the Marketch plugin: it automatically generates an HTML page so a developer can see the CSS styles.

Avocode is another collaboration tool. It allows a designer to hand off .psd and .sketch designs to the developer. The process is pretty simple: the Avocode plugin lets the designer stay in Sketch or Photoshop while making the design accessible to all team members, since Avocode runs in the cloud.

Avocode detects all font styles, font sizes and weights and transforms them into CSS. Avocode also scales up vector shapes, converts colors to code and measures distances.

With the help of Avocode, a front-end developer can simply copy HTML code from the assets and generate CSS and Sass out of .psd and .sketch files.

Both a web manager and a desktop application are available.

The tangible drawbacks are:

  • Avocode doesn’t track minor design changes, only global ones like revisions, and comment notifications.
  • No free packages.
  • Not aimed at iOS and Android. Look up Sympli or Zeplin for that purpose.

We cannot omit Figma - a dark horse of the collaboration tools market. Like the previous two, Figma runs in the cloud. It is a browser-based Photoshop that lets you make changes in no time. Figma has version control, so developers and designers can rewind a project back to any stage. This tool also lets you see how the design will look on mobile devices, laptops and so on. We encourage you to discover Figma’s features on your own with the help of the detailed guide (even keyboard shortcuts are included!) on the official website.

Last but not least - Adobe Extract. It’s not a collaboration tool, but it does let you get the specs (colors, fonts, CSS) out of .psd assets. The application works on desktop and mobile devices.

WHAT ELSE?

Now the design is approved and the asset has been successfully turned into structured CSS. What else can be done to optimize the workflow?

Webpack, gulp, npm scripts - these guys help you automate the majority of routine tasks and simplify the development process. You just set them up once and there you go: compile CSS, minify, lint the JavaScript and CSS code, concatenate all files into one, keep track of file changes, update data automatically... aren’t you tired yet? There are many more functions available!

CONCLUSION

We’ve looked at what component-based thinking is and which markup technologies and collaboration tools are trending. Still, there is only one skill that ties all of the above together.

It’s the willingness to work in group.

A front-end developer and a designer should know how their duties are interconnected and how they can collaborate efficiently.

Front-end developers should remember that all those fancy fonts, margins and small elements like buttons do deliver a particular function. Designers ought to know that every website block depicted as static will actually hold dynamic content and will render differently on mobile and desktop devices. There isn’t an enormous amount of information to learn: knowing at least the basics is enough. We recommend starting with the article “How to befriend design and front-end”.

Find out how your duties are interconnected with those of your colleagues, use the trending tools and approaches we told you about, and take your development process to a new level.

Using Guzzle and PHPUnit for REST API Testing

Cloudflare Blog -

APIs are increasingly becoming the backbone of the modern internet - whether you're ordering food from an app on your phone or browsing a blog using a modern JavaScript framework, chances are those requests are flowing through an API. Given the need for APIs to evolve through refactoring and extension, having great automated tests allows you to develop fast without needing to slow down to run manual tests to work out what’s broken. Additionally, by having tests in place you’re able to firmly identify the requirements that your API should meet: your API tests effectively form a tangible and executable specification. API testing offers an end-to-end mechanism for testing the behaviour of your API, which has advantages in both reliability and development productivity.

In this post I'll be demonstrating how you can test RESTful APIs in an automated fashion using PHP, by building a testing framework through creative use of two packages - Guzzle and PHPUnit. The resulting tests will be something you can run outside of your API as part of your deployment or CI (Continuous Integration) process.

Guzzle acts as a powerful HTTP client which we can use to simulate HTTP requests against our API. Though PHPUnit is primarily a unit testing framework (based on xUnit), in this case we will be using it to test the HTTP responses we get back from our API via Guzzle.

Preparing our Environment

In order to pull in the required packages, we’ll be using Composer - a dependency manager for PHP. Inside our Composer project, we can simply require the dependencies we’re after:

$ composer require phpunit/phpunit
$ composer require guzzlehttp/guzzle
$ composer update

When we ran composer require for each of the two packages, Composer went ahead and actually downloaded the packages we want; these are stored in the vendor directory. Additionally, when we ran composer update, Composer updated its PSR-4 autoload script, which allows us to pull in all the dependencies we’ve required with one file include; you can find it in vendor/autoload.php.
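For instance, if you were to use these packages from a standalone script rather than through PHPUnit's bootstrap, pulling in every dependency is a single include. A minimal sketch (the script itself is hypothetical, not part of this tutorial):

<?php

// Composer's PSR-4 autoloader makes the Guzzle (and PHPUnit) classes available.
require __DIR__ . '/vendor/autoload.php';

$client = new GuzzleHttp\Client(['base_uri' => 'https://httpbin.org/']);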

With our dependencies in place, we can now configure PHPUnit to use Guzzle. In order to do this, we need to tell PHPUnit where our Composer autoload file is, but also where our tests are located. We can do this through writing a phpunit.xml in the root directory of our project:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="vendor/autoload.php">
  <testsuites>
    <testsuite name="REST API Test Suite">
      <directory suffix=".php">./tests/</directory>
    </testsuite>
  </testsuites>
</phpunit>

In the XML above, there are two noteworthy elements: the opening phpunit tag, whose bootstrap attribute defines where our Composer autoload script is, and the testsuite element, which defines our test suite (with a child directory element to define where the specific tests live). From here, we can just add an empty directory called tests for our tests to reside in.

If we now run PHPUnit (through the command ./vendor/bin/phpunit), we should see an output similar to the one I get below:

With our environment defined, we’re ready to move on to the next step. First, purely for the sake of convenience, I’ve added a shortcut to my composer.json file so that when I run composer test it will point to ./vendor/bin/phpunit. You can do this by adding the following JSON to your composer.json file:

"scripts": { "test": "./vendor/bin/phpunit" } Writing our Tests

As an example, I’ll be writing tests against an endpoint at httpbin.org. The first test I’ll write will be against the /user-agent endpoint, so I’ll create a file called UserAgentTest.php; be sure to extend the PHPUnit_Framework_TestCase class:

<?php

class UserAgentTest extends PHPUnit_Framework_TestCase {

}

Before each test, PHPUnit will run the setUp method and after the test has executed it will run the tearDown method in the class (if they exist). By utilising these methods we can instantiate our Guzzle HTTP client ready for each test and then return to a clean slate afterwards:

<?php

class UserAgentTest extends PHPUnit_Framework_TestCase {

    private $http;

    public function setUp() {
        $this->http = new GuzzleHttp\Client(['base_uri' => 'https://httpbin.org/']);
    }

    public function tearDown() {
        $this->http = null;
    }
}

Note that if you feel even more adventurous, you can utilise environment variables (through the getenv method) to set the base URI - for this tutorial, however, I’ll be keeping things simple.
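If you did want to go that route, a minimal sketch of the setUp method might look like the following; the API_BASE_URI variable name is a hypothetical choice, not something defined by this tutorial:

public function setUp() {
    // API_BASE_URI is a hypothetical environment variable; fall back to httpbin.org.
    $baseUri = getenv('API_BASE_URI') ?: 'https://httpbin.org/';
    $this->http = new GuzzleHttp\Client(['base_uri' => $baseUri]);
}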

With our setUp and tearDown methods in place, we can now go ahead and actually create our test methods. As I’ll start off by testing against the GET HTTP verb, I’ll name the first test method testGet. From here, we can make the request and then check the properties we get back.

public function testGet() {
    $response = $this->http->request('GET', 'user-agent');

    $this->assertEquals(200, $response->getStatusCode());

    $contentType = $response->getHeaders()["Content-Type"][0];
    $this->assertEquals("application/json", $contentType);

    $userAgent = json_decode($response->getBody())->{"user-agent"};
    $this->assertRegexp('/Guzzle/', $userAgent);
}

In the method above, I’ve made a GET request to the user-agent endpoint. I can then check the response code I get back was indeed 200 using the first assertion. The next assertion I test against is whether the Content-Type header indicates the response is JSON. Finally I check that the JSON body itself actually contains the phrase “Guzzle” in the user-agent property.

We can add additional assertions as required, but we can also add additional methods for other HTTP verbs. For example, here’s a simple test to see if I get a 405 status code when I make a PUT request to the /user-agent endpoint:

public function testPut() {
    $response = $this->http->request('PUT', 'user-agent', ['http_errors' => false]);
    $this->assertEquals($response->getStatusCode(), 405);
}

Next time we run PHPUnit, we can see whether our tests pass successfully and also get insight into some statistics surrounding the execution of these tests.

Conclusion

That’s all there is to this simple approach to API testing. If you want some insight into the overall code, feel free to review the project files in the Github repository.

If you find yourself using this testing set-up, be sure to review the Guzzle Request Options to learn what kind of HTTP requests you can run with Guzzle, and also check out the types of assertions you can run with PHPUnit.
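As a small illustration of such request options (the option values here are arbitrary examples, not part of the tutorial above), a request might combine custom headers, a query string, a timeout and disabled HTTP error exceptions:

// A few commonly used Guzzle request options; values are arbitrary examples.
$response = $this->http->request('GET', 'user-agent', [
    'headers'     => ['Accept' => 'application/json'], // extra request headers
    'query'       => ['verbose' => 1],                 // appended as ?verbose=1
    'timeout'     => 5,                                // abort after 5 seconds
    'http_errors' => false,                            // don't throw on 4xx/5xx
]);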

Drop Guard: New security challenges arise

Planet Drupal -

Two days ago another highly critical security update affected Drupal and many other CMS systems: the PHPMailer library, which leaves millions of websites vulnerable to a remote exploit (see https://www.drupal.org/psa-2016-004 for details). In comparison to Drupalgeddon, which had a risk score of 25/25, this update has 23/25. BUT there are some things which make this update even riskier than Drupalgeddon:


Cheeky Monkey Media: Making Your Online Properties Fast and Efficient with AMP (and Monkey Flinging Amplification)

Planet Drupal -

AMP (Accelerated Mobile Pages)

The official definition:

The Accelerated Mobile Pages (AMP) Project is an open source initiative that embodies the vision that publishers can create mobile optimized content once and have it load instantly everywhere.

Our definition:

A way we can make our websites fling poo load and function as quickly and efficiently as possible in order to provide a much better mobile user experience.

Why Do We Need AMP?

Mobile devices... Mobile devices just aren’t as fast as desktop devices. Arguable, sure, when you’re using your mobile device to load a basic website over an ultra-fast internet connection. That internet highway is not always as clear as on a sunny day, though. You may not have LTE, 4G, 3G, 4 bars, 2 bars. You may have the crappiest connection known to mankind. But you still simply want that internets, cause you needs it, it’s your lifeline, your precioussss.

FFW Agency: Building Platforms for Millions with NBC Sports Digital

Planet Drupal -


We had a lot of great accomplishments at FFW in 2016. It’s been a year where we’ve helped our clients shatter records and drive amazing business results. A great example of this is NBC Sports Digital, with whom we’ve collaborated on a number of projects: we built new Drupal websites for NBCSports.com and its regional sports networks (RSNs), and we also constructed NBCOlympics.com, part of the most successful media event in history.

 

Building a Hub for America’s Sports Fans - NBCSports.com and Regional Sites


NBC Sports Digital asked us to implement a redesign of their digital sports hubs (NBCSports.com and Sports Regional Networks websites) before the start of the NFL season. They asked us specifically to focus on videos and advertising, to increase user and sponsor satisfaction.

Our team built each page according to NBC’s new designs, migrated the network’s data onto a scalable Drupal platform, and developed the new site to have custom layouts, themes, and functionality. We also implemented responsive design, ensuring the site looks great on any device. We were able to launch NBC Sports Digital’s new site before the NFL season kickoff with full video functionality and improved advertising, and the response has been great.

In addition to rebuilding NBCSports.com, we also rebuilt the websites for NBC Sports Group’s Regional Networks. NBC wanted its regional sites to have a consistent look and feel, so we set up a multisite project with all the same backend code. Our team implemented several feed customizations for each site so each different region would have unique content. The result is an ecosystem of regional sites with custom content and an easy-to-maintain shared codebase—another win for NBC!

Visit the site:
NBCSports.com

 

NBCOlympics.com: A Gold Medal in Site Performance


In order to provide a best-in-class digital experience for NBC Olympics’ coverage of the 2016 Rio Games, NBCOlympics.com required a massive platform. When the opportunity arose for FFW to build the web platform to deliver performance, security, and stability at such a massive scale, we were excited to step up to the challenge. During the Rio Olympics, NBCOlympics.com hosted 3.3 billion total streaming minutes, 2.71 billion live streaming minutes, and was visited by 100 million unique users.

A thorough discovery phase allowed us to plan the implementation of this extremely complex and massive project. After a year and a half of work, we launched the completed site in April 2016. We also built in a state-of-the-art advertising platform, which helped NBC Sports manage their sponsors’ content. The new platform served up content to a record number of users, who were able to view localized listings of broadcasts and watch live streams of the events from any device.

We couldn’t be more proud of our teams who worked on all three sites, and we’d like to say thanks to NBC for letting us be part of such a historic event.

 

"NBC Olympics has set the new benchmark for how to build and operate a Drupal-based CMS in a multi-platform world." - Eric Black, CTO, NBC Sports Digital


Visit the site:
NBCOlympics.com


Agiledrop.com Blog: AGILEDROP: On which Drupal Camp to go?

Planet Drupal -

As mentioned in our interview with Janez Urevc, there are too many internationally oriented Drupal events. DrupalCons grow each year, and so do Drupal Camps. It's hard for organizers to attract visitors, because Drupalistas can't travel every weekend. Knowing where to expand your Drupal skills is therefore a key thing. In the past few weeks, we have written many blog posts about Drupal Camps for you. Now we give you an overview of the findings from our Drupal Camp world tour, so that deciding which Drupal Camps to visit will be easier. After reading about and… READ MORE

myDropWizard.com: Drupal 6 workaround for the highly critical vulnerability in PHPMailer

Planet Drupal -

You may have noticed that CVE-2016-10033 came out yesterday, which discloses a Remote Code Execution (RCE) vulnerability in the PHPMailer library, which is used by popular contrib modules like SMTP or PHPMailer.

This is a highly critical vulnerability because Remote Code Execution means an attacker can run arbitrary code on your server!

The Drupal Security team just made a PSA today: DRUPAL-PSA-2016-004

The real, full fix is to update the PHPMailer library to version 5.2.19 or later, or if you use the SMTP module version 7.x-1.5 or lower, to update to SMTP 7.x-1.6 (because SMTP 7.x-1.x embeds the library in the module).

However, if you're using Drupal 6, you probably have an old version of PHPMailer (5.1 or lower), and newer versions may not be compatible with the code on your site (either custom or contrib). Attempting an update in the middle of the holidays when not everyone is available to test or deal with follow-up issues might not be the best idea.

So, what we're recommending (and what we've already done for our customers) is removing the vulnerable feature from the PHPMailer library.

The vulnerability is in PHPMailer's support for sending mail via the 'sendmail' command-line application. However, odds are you're using PHPMailer exclusively for sending via SMTP (like the SMTP and PHPMailer modules do). So, you can just delete the code for that feature!

Here's how... Open the class.phpmailer.php file, and delete:

So you want to expose Go on the Internet

Cloudflare Blog -

This piece was originally written for the Gopher Academy advent series. We are grateful to them for allowing us to republish it here.

Back when crypto/tls was slow and net/http young, the general wisdom was to always put Go servers behind a reverse proxy like NGINX. That's not necessary anymore!

At Cloudflare we recently experimented with exposing pure Go services to the hostile wide area network. With the Go 1.8 release, net/http and crypto/tls proved to be stable, performant and flexible.

However, the defaults are tuned for local services. In this article we'll see how to tune and harden a Go server for Internet exposure.

crypto/tls

You're not running an insecure HTTP server on the Internet in 2016. So you need crypto/tls. The good news is that it's now really fast (as you've seen in a previous advent article), and its security track record so far is excellent.

The default settings resemble the Intermediate recommended configuration of the Mozilla guidelines. However, you should still set PreferServerCipherSuites to ensure safer and faster cipher suites are preferred, and CurvePreferences to avoid unoptimized curves: a client using CurveP384 would cause up to a second of CPU to be consumed on our machines.

&tls.Config{
	// Causes servers to use Go's default ciphersuite preferences,
	// which are tuned to avoid attacks. Does nothing on clients.
	PreferServerCipherSuites: true,
	// Only use curves which have assembly implementations
	CurvePreferences: []tls.CurveID{
		tls.CurveP256,
		tls.X25519, // Go 1.8 only
	},
}

If you can take the compatibility loss of the Modern configuration, you should then also set MinVersion and CipherSuites.

MinVersion: tls.VersionTLS12,
CipherSuites: []uint16{
	tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
	tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
	tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, // Go 1.8 only
	tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,   // Go 1.8 only
	tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
	tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,

	// Best disabled, as they don't provide Forward Secrecy,
	// but might be necessary for some clients
	// tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
	// tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
},

Be aware that the Go implementation of the CBC cipher suites (the ones we disabled in Modern mode above) is vulnerable to the Lucky13 attack, even if partial countermeasures were merged in 1.8.

Final caveat, all these recommendations apply only to the amd64 architecture, for which fast, constant time implementations of the crypto primitives (AES-GCM, ChaCha20-Poly1305, P256) are available. Other architectures are probably not fit for production use.

Since this server will be exposed to the Internet, it will need a publicly trusted certificate. You can get one easily and for free thanks to Let's Encrypt and the golang.org/x/crypto/acme/autocert package’s GetCertificate function.

Don't forget to redirect HTTP page loads to HTTPS, and consider HSTS if your clients are browsers.

srv := &http.Server{
	ReadTimeout:  5 * time.Second,
	WriteTimeout: 5 * time.Second,
	Handler: http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		w.Header().Set("Connection", "close")
		url := "https://" + req.Host + req.URL.String()
		http.Redirect(w, req, url, http.StatusMovedPermanently)
	}),
}
go func() { log.Fatal(srv.ListenAndServe()) }()

You can use the SSL Labs test to check that everything is configured correctly.

net/http

net/http is a mature HTTP/1.1 and HTTP/2 stack. You probably know how (and have opinions about how) to use the Handler side of it, so that's not what we'll talk about. We will instead talk about the Server side and what goes on behind the scenes.

Timeouts

Timeouts are possibly the most dangerous edge case to overlook. Your service might get away with it on a controlled network, but it will not survive on the open Internet, especially (but not only) if maliciously attacked.

Applying timeouts is a matter of resource control. Even if goroutines are cheap, file descriptors are always limited. A connection that is stuck, not making progress or is maliciously stalling should not be allowed to consume them.

A server that ran out of file descriptors will fail to accept new connections with errors like

http: Accept error: accept tcp [::]:80: accept: too many open files; retrying in 1s

A zero/default http.Server, like the one used by the package-level helpers http.ListenAndServe and http.ListenAndServeTLS, comes with no timeouts. You don't want that.

There are three main timeouts exposed in http.Server: ReadTimeout, WriteTimeout and IdleTimeout. You set them by explicitly using a Server:

srv := &http.Server{
	ReadTimeout:  5 * time.Second,
	WriteTimeout: 10 * time.Second,
	IdleTimeout:  120 * time.Second,
	TLSConfig:    tlsConfig,
	Handler:      serveMux,
}
log.Println(srv.ListenAndServeTLS("", ""))

ReadTimeout covers the time from when the connection is accepted to when the request body is fully read (if you do read the body, otherwise to the end of the headers). It's implemented in net/http by calling SetReadDeadline immediately after Accept.

The problem with a ReadTimeout is that it doesn't allow a server to give the client more time to stream the body of a request based on the path or the content. Go 1.8 introduces ReadHeaderTimeout, which only covers up to the request headers. However, there's still no clear way to do reads with timeouts from a Handler. Different designs are being discussed in issue #16100.

WriteTimeout normally covers the time from the end of the request header read to the end of the response write (a.k.a. the lifetime of the ServeHTTP), by calling SetWriteDeadline at the end of readRequest.

However, when the connection is over HTTPS, SetWriteDeadline is called immediately after Accept so that it also covers the packets written as part of the TLS handshake. Annoyingly, this means that (in that case only) WriteTimeout ends up including the header read and the first byte wait.

Similarly to ReadTimeout, WriteTimeout is absolute, with no way to manipulate it from a Handler (#16100).

Finally, Go 1.8 introduces IdleTimeout which limits server-side the amount of time a Keep-Alive connection will be kept idle before being reused. Before Go 1.8, the ReadTimeout would start ticking again immediately after a request completed, making it very hostile to Keep-Alive connections: the idle time would consume time the client should have been allowed to send the request, causing unexpected timeouts also for fast clients.

You should set Read, Write and Idle timeouts when dealing with untrusted clients and/or networks, so that a client can't hold up a connection by being slow to write or read.

For detailed background on HTTP/1.1 timeouts (up to Go 1.7) read my post on the Cloudflare blog.

HTTP/2

HTTP/2 is enabled automatically on any Go 1.6+ server if:

  • the request is served over TLS/HTTPS
  • Server.TLSNextProto is nil (setting it to an empty map is how you disable HTTP/2)
  • Server.TLSConfig is set and ListenAndServeTLS is used or
  • Serve is used and tls.Config.NextProtos includes "h2" (like []string{"h2", "http/1.1"}, since Serve is called too late to auto-modify the TLS Config)

Timeouts have a slightly different meaning in HTTP/2, since the same connection can be serving different requests at the same time; however, they are abstracted to the same set of Server timeouts in Go.

Sadly, ReadTimeout breaks HTTP/2 connections in Go 1.7. Instead of being reset for each request it's set once at the beginning of the connection and never reset, breaking all HTTP/2 connections after the ReadTimeout duration. It's fixed in 1.8.

Between this and the inclusion of idle time in ReadTimeout, my recommendation is to upgrade to 1.8 as soon as possible.

TCP Keep-Alives

If you use ListenAndServe (as opposed to passing a net.Listener to Serve, which offers zero protection by default) a TCP Keep-Alive period of three minutes will be set automatically. That will help with clients that disappear completely off the face of the earth leaving a connection open forever, but I’ve learned not to trust that, and to set timeouts anyway.

To begin with, three minutes might be too high, which you can solve by implementing your own tcpKeepAliveListener.

More importantly, a Keep-Alive only makes sure that the client is still responding, but does not place an upper limit on how long the connection can be held. A single malicious client can just open as many connections as your server has file descriptors, hold them half-way through the headers, respond to the rare keep-alives, and effectively take down your service.

Finally, in my experience connections tend to leak anyway until timeouts are in place.

ServeMux

Package level functions like http.Handle[Func] (and maybe your web framework) register handlers on the global http.DefaultServeMux which is used if Server.Handler is nil. You should avoid that.

Any package you import, directly or through other dependencies, has access to http.DefaultServeMux and might register routes you don't expect.

For example, if any package somewhere in the tree imports net/http/pprof clients will be able to get CPU profiles for your application. You can still use net/http/pprof by registering its handlers manually.

Instead, instantiate an http.ServeMux yourself, register handlers on it, and set it as Server.Handler. Or set whatever your web framework exposes as Server.Handler.

Logging

net/http does a number of things before yielding control to your handlers: Accepts the connections, runs the TLS Handshake, ...

If any of these go wrong a line is written directly to Server.ErrorLog. Some of these, like timeouts and connection resets, are expected on the open Internet. It's not clean, but you can intercept most of those and turn them into metrics by matching them with regexes from the Logger Writer, thanks to this guarantee:

Each logging operation makes a single call to the Writer's Write method.

To abort from inside a Handler without logging a stack trace you can either panic(nil) or in Go 1.8 panic(http.ErrAbortHandler).

Metrics

A metric you'll want to monitor is the number of open file descriptors. Prometheus does that by using the proc filesystem.

If you need to investigate a leak, you can use the Server.ConnState hook to get more detailed metrics of what stage the connections are in. However, note that there is no way to keep a correct count of StateActive connections without keeping state, so you'll need to maintain a map[net.Conn]ConnState.

Conclusion

The days of needing NGINX in front of all Go services are gone, but you still need to take a few precautions on the open Internet, and probably want to upgrade to the shiny, new Go 1.8.

Happy serving!

Dries Buytaert: TAG Heuer using Drupal

Planet Drupal -

Growing up, my dad had a watch from TAG Heuer. As a young child, I always admired his watch and wished that one day I'd have one as well. I still don't have a TAG Heuer watch; however, I just found out that TAG Heuer relaunched its website using Drupal 8, and that is pretty cool too.

TAG Heuer's new website integrates Drupal 8 with their existing Magento commerce platform to provide a better, more content-rich shopping experience. It is a nice example of the "Content for Commerce"-opportunity that I wrote about a month ago. Check it out at https://www.tagheuer.com!

J-P Stacey: Drupal Global Sprint weekend hits Sheffield on Jan 28/29

Planet Drupal -

Last year I attended the Drupal Global Sprint's local event in Leeds, and this year Drupal Yorkshire comes to Sheffield! Everyone of any level of experience will be welcome at the Sheffield event, on the weekend of Jan 28/29, to come along and help out with the wider, open-source Drupal project.

Read more of "Drupal Global Sprint weekend hits Sheffield on Jan 28/29"
