
Case Study: Drupal Performance Testing

Reading time: 11 minutes

Written by Jesse Mortenson, Developer

Recently, a client came to us asking for help with performance problems. The reported problem was simple: the homepage loaded really slowly on mobile devices! As developers, we know that bug reports frequently hide a lot of complexity. Performance problems are no different, and they can fool us, too. Site performance issues can implicate as much as, oh, ALL of the code.

So, when we see a slow page, what do we try to fix? In this situation, we may be tempted to reach for our favorite hammer and see everything as a nail, but the real challenge is to identify the specific underlying causes and make changes that produce a concrete difference, instead of blindly hammering away.


The Project

The client's site is an online news outlet focused on a particular topic/community. It features news articles, polls, image galleries, and a few types of reviews of community resources. Its revenue comes from advertising, particularly from vendors that cater to this community of people.

This brings us to our first set of complications: the client wants a dynamic, complex homepage to attract and retain eyeballs; and also wants to display as many ads as possible. It turns out ads are bad for performance! But I'll save the debate about the merits of the internet ad economy for another day.

The Basics

First things first. There are a few things for which we should do blind checks, right off the bat. This is a Drupal 7 site. Is site caching turned on? Are all Views being cached? Is the database being used as the cache backend, or something fast (like Memcache or Redis)? Are most users anonymous (using caching) or logged-in (possibly not using caching)?

Before digging deeper, we did a sanity check for the low-hanging fruit: the known, common pitfalls with Drupal sites.

In this case, most visitors to the site were anonymous. There wasn't a lot of interactivity, and so there was not much chance that visitor behavior was triggering a bunch of costly operations based on varying queries.

The site did have caching enabled and was using Redis as the cache backend. Redis is fast, so I'd expect it to be a pretty swift solution. I actually spent a little time digging into it, which was probably a bad use of time: if a speedy cache backend is already in place and enabled, it's probably not the best place to start optimization work and analysis. However, there were two Views that had caching disabled. I turned it on for those, and also enabled a sitewide minimum cache lifetime of five minutes.
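
For reference, here is a minimal sketch of what those basics look like in Drupal 7. The Redis lines follow the Redis module's settings.php pattern and the variables are the standard Drupal 7 ones, but the host and module path are placeholders; Views caching itself is configured per View in the Views UI rather than in code.

    // settings.php: use Redis (via the Redis module and PhpRedis client)
    // as the default cache backend. Host and module path are placeholders.
    $conf['redis_client_interface'] = 'PhpRedis';
    $conf['redis_client_host']      = '127.0.0.1';
    $conf['cache_backends'][]       = 'sites/all/modules/redis/redis.autoload.inc';
    $conf['cache_default_class']    = 'Redis_Cache';
    // Keep the form cache in the database so form entries are never evicted.
    $conf['cache_class_cache_form'] = 'DrupalDatabaseCache';

    // Elsewhere (an update hook or a drush script): enable page caching for
    // anonymous users and a five-minute minimum cache lifetime.
    variable_set('cache', 1);            // Page cache for anonymous users.
    variable_set('cache_lifetime', 300); // Minimum cache lifetime, in seconds.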

Take a Step Back

Now's the time to take a deep breath. Don't start digging into the guts of Views queries or the MySQL slow query log. Don't send the client a recommendation to double the number of server nodes (well, unless the site is literally melting down at that moment). Depending on your professional background, you may be familiar with certain sets of performance problems to watch out for, and strategies to mitigate them. But now is not the time to start hammering away. Now is the time to measure.

Page Performance Hides a Lot

The web applications we write today are complicated. Content Management Systems and Platforms-as-a-Service can open up a lot of power to folks who do not have the time or resources to build that power themselves. But just because it's easier to deliver functionality doesn't mean the underlying process of building, sending, and rendering that webpage is simpler. A lot of steps are hidden behind a slow page load. So, which one is going to give the most value per hour of optimization work?

The answer is to start using tools to measure what's happening. If you don't have measurements, you can't do good performance optimization work, and you can't demonstrate the value of your work, because you have nothing concrete to point to at the end of the project. What if the client says that the site is still slow? "Well, it's fast enough for me" is probably not going to fly.

In this client's case, we have a hint about where to focus our efforts: they specifically mentioned mobile device performance, which suggests the client side may be the key to the problem. If your background is in PHP or other server-side work, that's where you'll instinctively start, but all the server-side tweaks in the world may not actually solve the problem. Without measuring to confirm where the bulk of the bad performance is occurring, we could waste a lot of time in areas that are relatively unimportant, or already well-optimized.

Measurement Tools

Here are a few measurement tools I used on this project:

  • NewRelic: NewRelic increasingly has competitors, but for PHP-based sites, it's still the best tool to start measuring both server-side and client-side performance, in the context of overall performance. NewRelic breaks both sides into several main components and provides analysis of individual transactions/pages. It doesn't necessarily give you the "why", but at minimum, it will give you helpful direction to the "what." Its Browser component conveniently breaks performance down by browser and mobile vs. desktop.
  • Chrome/Firefox developer tools/Firebug: Essential tools for doing a deep-dive into what is happening on the client side of the website. Keep in mind that using these tools introduces a bias: you're measuring with one particular computer/browser. Don't assume that what happens on your late-model MacBook holds true for other (or even most) site users.
  • Google PageSpeed Insights: A client-side analysis that provides some recommendations, including a mobile-focused report.
  • WebPageTest: A client-side analysis that focuses a little more on the requests involved in loading the page.

What Did We Find?

NewRelic was super helpful in giving us an overview. We found:

Server-side processing did show some slowness. The APM component reported that average requests were taking around 1300ms, which is kinda slow for a site on a robust hosting platform that uses fast caching. My changes were eventually able to get these down to around 620ms.

More importantly, client-side performance was really slow. The Browser component reported that pages took an average of 15 seconds to load completely in visitors' browsers. Some visitors, especially on mobile, were waiting 25-30 seconds for a page to fully load. Whoa!

This stood out as a clear, high-value target for optimization. I was eventually able to get averages closer to 8-12 seconds. Still not great, but those numbers were somewhat inflated by ads on the site. Those ads are powered by 3rd party networks, which means each visitor's browser is waiting (at some point) for requests to and rendering of resources that we don't control.

What's Taking These Browsers So Long?

Another reason measurement tools are critical: your subjective experience on a particular browser is not representative. The site was loading much more quickly on my desktop browser than shown in the NewRelic averages. Google PageSpeed helped show why. It has a useful concept of the resources that must be loaded to display above-the-fold content.

This is especially important in mobile browsers, where network speed and CPU power are often limited. In the case of this client's site, a LOT of resources needed to be loaded and processed by the browser before it could show the content at the very top of the page. Fast on desktop, much slower on mobile. PageSpeed has related documentation with a lot of recommendations in this area: https://developers.google.com/speed/docs/insights/rules

In the case of this client, we found a variety of smaller problems that contributed to the mobile browser page load problem and some solutions:

  • Blocking JavaScript that could be inlined. The browser waits while it loads external scripts referenced in the HEAD of the document (https://developers.google.com/speed/docs/insights/BlockingJS). Outputting JavaScript as inline code can cause problems of its own, so I turned this feature on only for the homepage, because I knew it needed to be as fast as possible and was worth any possible debugging time. The AdvAgg module (https://www.drupal.org/project/advagg) has a feature for inlining JavaScript on particular pages.
  • Limiting the external JavaScript that was being executed. Several third-party JS tools had been added during prior development that were no longer used, or not very important. We stripped these out (there's a sketch of this approach after this list).
  • Combining requests to external resources. The site was making several requests to the Google Fonts API to load custom fonts. I reduced this to just one request. And then I realized: why make this an external request at all? I used this tool to download the free fonts and changed CSS to load them from the site's server: http://www.localfont.com/
  • Unnecessarily complex HTML. The site was built to be mobile-responsive. However, it relied exclusively on CSS media queries to hide elements that were not used in the mobile layout, which meant that desktop-only markup was still downloaded and processed by the mobile browser. Hiding elements with CSS does spare the mobile browser some rendering work, but not as much as omitting those elements from the markup altogether. A complex set of HTML elements had to be processed and rendered before the top of the mobile page could be displayed. Unfortunately, once the site is built, this is not an easy mess to untangle, and we didn't end up getting the budget to do the bulk of that work.
  • Many CSS rules were being loaded on each page. Only a fraction of those was needed to style the homepage.
  • Lots of resource requests overall. The browser needed to request around 200 resources overall. That's a lot!
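
As an example of the JavaScript trimming mentioned above, here is a hedged sketch of how unused third-party scripts can be dropped from the homepage in Drupal 7. The module name and the script paths are hypothetical; the real list came from auditing the page in the browser developer tools.

    /**
     * Implements hook_js_alter().
     *
     * Remove third-party scripts the homepage no longer needs.
     * "mymodule" and the paths below are placeholders for illustration.
     */
    function mymodule_js_alter(&$javascript) {
      if (!drupal_is_front_page()) {
        return;
      }
      // Scripts identified as unused during the audit (hypothetical paths).
      $unused = array(
        'http://widgets.example.com/embed.js',
        'sites/all/libraries/old_carousel/jquery.carousel.js',
      );
      foreach ($unused as $path) {
        unset($javascript[$path]);
      }
    }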

Speeding Up Resource Requests

We reduced some of the browser requests for resources like JavaScript files and CSS files, but the site still required a lot of requests in the form of images. These images were deemed essential by the client as the site needed to be visually enticing. Luckily, these images were already being provided by Drupal's image styling system, so they were appropriately sized and compressed. It was still a lot of requests. So, the question remained, how do we speed that up?
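
For context, serving those images through an image style in Drupal 7 looks roughly like this. The style name and field name below are hypothetical, but the pattern is what keeps each derivative sized and compressed for the layout instead of shipping the original upload.

    // Render an image through an image style so the browser receives a
    // resized, compressed derivative. 'homepage_teaser' and 'field_image'
    // are placeholder names.
    print theme('image_style', array(
      'style_name' => 'homepage_teaser',
      'path'       => $node->field_image[LANGUAGE_NONE][0]['uri'],
      'alt'        => check_plain($node->title),
    ));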

The client was already using a Content Delivery Network (CDN): Amazon's CloudFront. This provides low-latency access to static resources (like images). However, a critical limitation of HTTP 1.1 is that the browser will only request between two and six resources in parallel (i.e., at the same time) from a given domain. Even if images are small and cached in a CDN, the browser is still going to load them in small batches until all are loaded. With about 80 images on the page, that meant over a dozen sequential batches of image requests for a single page load.

We made two changes to help with the situation: domain sharding and far-future expiration. The HTTP 1.1 limits are applied per domain, so by serving static assets from several domains, we allow the browser to do more work in parallel. The CDN module (https://www.drupal.org/project/cdn) can support multiple CDN domains, so we changed CloudFront settings to add static1.domain.com/static2.domain.com/static3.domain.com to the client's CDN account, and enabled those in the CDN module's settings. The CDN module then spreads resources across those domains as it serves them. This is domain sharding. As a result, the browser could chew through those images more quickly.
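
The CDN module handles the URL rewriting for you, but to make the idea concrete, here is a minimal illustration of domain sharding using Drupal 7's hook_file_url_alter(). This is not the CDN module's actual implementation; the module name and shard domains are placeholders.

    /**
     * Implements hook_file_url_alter().
     *
     * Illustrative only: rewrite public file URLs onto shard domains.
     */
    function mymodule_file_url_alter(&$uri) {
      $shards = array(
        'http://static1.domain.com',
        'http://static2.domain.com',
        'http://static3.domain.com',
      );
      if (file_uri_scheme($uri) == 'public') {
        $target = file_uri_target($uri);
        // Hash the path so a given file always maps to the same shard;
        // picking at random would hurt browser caching between pages.
        $shard = $shards[abs(crc32($target)) % count($shards)];
        $files_dir = variable_get('file_public_path', conf_path() . '/files');
        $uri = $shard . '/' . $files_dir . '/' . $target;
      }
    }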

The CDN module also supports far-future expiration. Your CDN typically looks for a header that tells it how long it can retain a resource in its edge cache. Images, for example, typically don't change once published (and if they do, the filename also changes), so they're great candidates for being cached a long, long time. That's far-future expiration: don't expire my resources from the cache until far in the future. Expiration for particular file types is best set via your web server's configuration files.

In the client's case, however, we did not have control over the webserver configuration files. The CDN module far-future expiration feature rewrites resource URLs so that they go through a special menu path that ensures the file is sent along with special headers to define cache expiration times far in the future. This mode is a little problematic (it can conflict with AdvAgg in certain ways, for example), but in our case, it was the best option.
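
Concretely, "far in the future" comes down to response headers like the ones below. The CDN module's far-future mode sets these (and more) when it serves a file through its menu path, so this sketch just shows the kind of headers involved, using Drupal 7's drupal_add_http_header().

    // Roughly one year, the conventional upper bound for HTTP caching.
    $one_year = 365 * 24 * 60 * 60;

    // Tell browsers and CDN edge caches to keep this response for a year.
    drupal_add_http_header('Cache-Control', 'public, max-age=' . $one_year);
    drupal_add_http_header('Expires', gmdate('D, d M Y H:i:s', REQUEST_TIME + $one_year) . ' GMT');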

CDN Gotchas

One problem with loading resources from a domain other than your website's is that you can run into browser security restrictions. Loading images from a different domain is no problem, but browsers enforce cross-origin rules on web fonts, and those fonts simply won't work.

You need to make sure you have a proper Cross-Origin Resource Sharing (CORS) setup with your CDN domains for those resources to work, which can be tricky. In our case, it was further complicated by a mix of HTTP and HTTPS requests to the site. I ended up excluding web fonts from the CDN setup using the CDN module's advanced settings.
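
For reference, the header the browser looks for is Access-Control-Allow-Origin on the font responses coming from the CDN domains. Normally you would set it in the web server or CDN configuration for font file types; the sketch below just shows the header itself, assuming (hypothetically) a font served through a Drupal 7 callback. With this site's mix of HTTP and HTTPS pages, a single allowed origin wasn't enough, which is part of why excluding fonts from the CDN was the simpler fix.

    // Allow the site's own origin to use fonts served from a CDN domain.
    // 'https://www.example.com' is a placeholder for the real site origin.
    drupal_add_http_header('Access-Control-Allow-Origin', 'https://www.example.com');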

Wrapping Up and Lessons Learned

In the end, we were able to demonstrate some concrete performance gains in our measurements and the client was happy. We didn't make any recommendations that would have substantially increased the client's monthly hosting bills, because we could tell that most of the problem was on the client side.

Of course, some entrenched problems remained that we couldn't address in the short term. If you're building a site in which mobile clients need great performance while also viewing complex, visually-rich content, then performance measurement ought to be part of the development process from the beginning.

A CMS like Drupal makes it easy to add lots of features and put them together on a page, but point-and-click building tools like Views, Panels, and the various dynamic HTML plugins for galleries and carousels tend to output complex HTML.

Any one in isolation will be fine on mobile, but throw a bunch together without incrementally measuring performance, and you won't notice what a mess you're creating. Responsive design needs to be more than just hiding some content and changing column sizes with CSS.

In cases where we are building the site from scratch, we do our due diligence to structure it with performance in mind, and we keep monitoring it as it develops. Even after the project is complete, we recommend periodic performance reviews throughout the life of the site so that it stays optimally tuned and ahead of the competition.

Interested in a Website Performance Analysis? Contact Us Now!


About Jesse Mortenson

Jesse Mortenson is a PHP veteran and sometimes Linux systems administrator. He founded a web agency in 2001, and began working with Drupal in 2005. After several happy years as a Drupal expert, he decided to hang up the agency business in 2011. He spent the next four years actually working on the same piece of software for a change, learning quite a bit about enterprise applications. But the rambling life again called his name: in 2015, he hung up his company shoes and once again set out walking the Drupal road. Jesse lives in Minneapolis and listens to a lot of Radio K.
