Following on from my previous article, in which I discussed the key principles behind Magento optimisation, this article takes an in-depth look at the tools and metrics you can use to properly measure and optimise the performance of your Magento eCommerce store.
New Relic runs on your web server as a daemon and reports back to an external web-based dashboard, allowing you to drill down into your web application's speed and identify bottlenecks. It is also now natively supported by Magento 2. We are going to use this as the primary tool for application performance optimisation.
How to set it up:
First of all, go over to https://newrelic.com/ and register an account. Once you have the account you will be provided with a license key. Save this, as it will be needed for the next step.
Now you will need to get your hosting provider or system administrator to install the daemon onto your web server, or, if you are comfortable doing so, go ahead and install it yourself. Instructions on how to do so can be found here. You will need to provide the license key in order to complete the installation.
Once the daemon has been installed on your server, you will start to see data being reported in your account within a few minutes.
Now you can start to look at some of the metrics we previously discussed.
Average application response time
This is one of the first things that you will notice when looking at the control panel. It gives a really good feel for how fast your app is responding on average across all requests. I would generally advise that a Magento store should be responding within an average of around 200-500ms. Anything more and you will need to look more closely at why the average is underperforming. Luckily, New Relic provides all the tools required to identify which response times are dragging down the average.
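As a rough illustration of this check, here is a minimal sketch that averages a set of response-time samples and flags the result against the 200-500ms band. The sample values are entirely hypothetical; in practice the numbers come from New Relic's dashboard.

```python
def average_response_ms(samples):
    """Average a list of response times in milliseconds."""
    return sum(samples) / len(samples)

def within_target(avg_ms, low=200, high=500):
    """True if the average falls inside the advised 200-500ms band."""
    return low <= avg_ms <= high

# Hypothetical samples pulled from your monitoring data
samples = [240, 310, 480, 520, 290]
avg = average_response_ms(samples)
print(f"average: {avg:.0f}ms, within target: {within_target(avg)}")
```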
Not all requests are directly customer-facing (API calls, for example), but the best place to start is with the requests that are taking longer than they should, and then to identify why.
You can find your slow transactions by navigating to "Transactions" on the main New Relic menu. From here you can drill down into each transaction and look in more detail at the processes behind it. New Relic provides plenty of useful graphs to display the data in a meaningful way. Once you have discovered which processes are particularly slow, you can begin to identify issues at a code level and know exactly where to focus in order to start fixing them.
Average MySQL response time
Behind every Magento site is a database (hopefully a MySQL one!). Often when drilling into slow transactions you will notice that PHP can sit waiting for the database to provide data, and that what initially seems like a code bottleneck is in fact a database-level issue.
New Relic provides a nice interface that allows you to look into database queries in more detail. The Databases section provides insights into each individual database query, including 'Top database operations by time consumed'. This can be especially useful within an application like Magento, where queries in code are often abstracted and wrapped in data models.
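One pattern that frequently shows up at the top of 'time consumed' in codebases like this is the N+1 query problem, where an abstracted data model fires one query per record instead of a single batched query. A toy sketch (all names are hypothetical; the "database" just counts queries) shows the difference:

```python
class FakeDb:
    """Stand-in for a database connection that counts queries issued."""
    def __init__(self):
        self.query_count = 0

    def query(self, sql, params=None):
        self.query_count += 1
        return {}  # pretend result set

def load_products_one_by_one(db, product_ids):
    # N+1 style: the model abstraction loads each record separately
    return [db.query("SELECT * FROM products WHERE id = %s", (pid,))
            for pid in product_ids]

def load_products_batched(db, product_ids):
    # Batched style: one query fetches every record at once
    return db.query("SELECT * FROM products WHERE id IN %s",
                    (tuple(product_ids),))

db = FakeDb()
load_products_one_by_one(db, range(100))
print(db.query_count)  # 100 queries

db = FakeDb()
load_products_batched(db, range(100))
print(db.query_count)  # 1 query
```

The abstraction makes both call sites look equally innocent in PHP code, which is exactly why the database-level view is so valuable.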
Every site is obviously different but some common reasons for application inefficiency are:
- Badly written 3rd party modules
- Inefficient code implementation during build
- Unoptimised Apache, Nginx or MySQL servers
Now that we have the underlying application performing well, let's look at improving the frontend performance of your site. For this, I recommend GTMetrix, for a few reasons:
- It’s free
- It's simple to use
- It allows you to save previous reports
- It gives a very good overview of frontend performance, including Google PageSpeed and YSlow scores
Let’s now revisit some of the metrics that we outlined earlier:
Time To First Byte / Document Interactive Time / Total render time
These three metrics are distinct but closely related: they all measure load time to the end user, each in a slightly different way.
Total render time measures how long the full page takes to load. While it is a good overall measure, the user often doesn't need to wait for the page to be fully loaded in order to feel like it has. Third-party scripts may continue to load in the background well after the page has appeared to have loaded.
The Time To First Byte (TTFB) is a metric that is commonly used when looking at frontend performance. As the name suggests, this is the time it takes for the user's browser to receive the first byte of data from the web server after requesting it. As such, it encapsulates some of the app response time too.
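For illustration, TTFB can be measured by hand with a raw socket: time the gap between sending the request and the first byte of the response arriving. The sketch below is entirely illustrative; it runs against a throwaway local server whose body is deliberately delayed, so the first byte and the full payload arrive at noticeably different times.

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def measure(host, port, path="/"):
    """Return (ttfb_seconds, total_seconds) for a plain HTTP GET."""
    start = time.monotonic()
    with socket.create_connection((host, port)) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode())
        sock.recv(1)                  # block until the first byte arrives
        ttfb = time.monotonic() - start
        while sock.recv(4096):        # drain the rest of the payload
            pass
    return ttfb, time.monotonic() - start

# Tiny local server that sends its headers immediately but delays the body,
# mimicking a page with a fast first byte and a slow full payload.
class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"x" * 1024
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        time.sleep(0.2)               # artificial payload delay
        self.wfile.write(body)

    def log_message(self, *args):
        pass                          # keep the demo output quiet

server = HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ttfb, total = measure("127.0.0.1", server.server_address[1])
server.shutdown()
print(f"TTFB: {ttfb:.3f}s, total: {total:.3f}s")  # total is far larger than TTFB
```

Against your own store you would point `measure` at the real host instead of the local stub; GTMetrix reports the same figure without any of this manual work.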
This is, however, a bit of a flawed and often misquoted metric for one simple reason: just because the user has received the first byte of data doesn't mean they can do anything with the page. For example, the first byte might be returned and then they have to wait tens of seconds to receive the full page payload. Which leads us nicely into the next metric, Document Interactive Time.
The Document Interactive Time (DiT) is perhaps the golden egg of frontend performance metrics. It is a measure of how long it takes for the page to become loaded enough for the user to be able to interact with it. In this sense it is by far the most meaningful metric to your end users and, in my opinion, means the most when measuring frontend performance.
With the above covered let's look at some more top level frontend performance metrics that matter.
Page size

Page size matters because it is the total amount of data that is downloaded each time the user accesses a page on your site. It is everything, including media such as images and videos, and, fairly predictably, the more data you have, the longer your user will be waiting. Based on the industry average, GTMetrix advises that you aim for less than 3MB per page.
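A quick way to sanity-check a page against that budget is simply to total up its asset sizes. The filenames and byte counts below are made up for illustration; GTMetrix gives you the real breakdown per request.

```python
# Hypothetical asset sizes, in bytes, for a single page load
assets = {
    "index.html": 45_000,
    "styles.css": 120_000,
    "merged.js": 600_000,
    "hero-banner.jpg": 1_400_000,
    "product-thumbs.jpg": 900_000,
}

total_bytes = sum(assets.values())
total_mb = total_bytes / 1_000_000
print(f"{total_mb:.2f}MB - {'within' if total_mb < 3 else 'over'} the 3MB budget")
```

In this hypothetical case the two images alone account for most of the weight, which is the usual story on Magento category and product pages.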
Number of HTTP requests
Another important metric is the number of HTTP requests. The main reason for this comes down to how web browsers handle HTTP requests. Whenever a page is requested the browser opens HTTP connections to the web server.
In basic terms, these can be thought of as channels through which data is transferred to and from the web server, one request per asset. The problem is that, with good reason, browsers will only open a limited number of concurrent connections per hostname (historically 2-4, around six in modern browsers) and queue further requests until the first ones have completed. We therefore get a bit of a queuing system, and the more requests we have, the longer the page load will take.
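The effect of that queuing can be approximated with some simple arithmetic: with N requests, C concurrent connections and an average time t per request, the page needs roughly ceil(N / C) "rounds" of requests. A sketch with hypothetical numbers:

```python
import math

def estimated_load_time(num_requests, concurrency, avg_request_secs):
    """Rough lower bound: requests are served in rounds of `concurrency`."""
    rounds = math.ceil(num_requests / concurrency)
    return rounds * avg_request_secs

# 80 assets, 4 concurrent connections, 100ms per request
print(estimated_load_time(80, 4, 0.1))  # 2.0 seconds of queuing
# Halving the request count halves the wait
print(estimated_load_time(40, 4, 0.1))  # 1.0 second
```

This is a deliberate simplification (it ignores connection reuse, asset size and parallel hostnames), but it shows why cutting the request count pays off so directly.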
Magento offers several options to improve this:
- Merging of JS and CSS files: This compiles your JavaScript files into one file and your CSS files into another, meaning there are just two HTTP requests for these particular assets rather than 30+.
- Native domain sharding support: You can configure multiple domains for different assets, allowing the browser to load assets from several domains concurrently and bypass the per-hostname connection limit.
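To see what merging buys you on your own pages, you can count the external JS and CSS references in the page's HTML. A minimal sketch using Python's built-in parser; the sample markup is made up, but the counting logic applies to any page source you paste in:

```python
from html.parser import HTMLParser

class AssetCounter(HTMLParser):
    """Counts external JS and CSS references in an HTML document."""
    def __init__(self):
        super().__init__()
        self.js = 0
        self.css = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.js += 1
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.css += 1

# Hypothetical snippet of an unmerged Magento page head
html = """
<html><head>
  <link rel="stylesheet" href="/css/base.css">
  <link rel="stylesheet" href="/css/theme.css">
  <script src="/js/prototype.js"></script>
  <script src="/js/checkout.js"></script>
  <script src="/js/tracking.js"></script>
</head><body></body></html>
"""

counter = AssetCounter()
counter.feed(html)
print(counter.js, counter.css)  # 3 2
```

With merging enabled, the same count would drop to one of each, turning five requests into two.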
PageSpeed and YSlow score
GTMetrix also provides you with a set of scores from both PageSpeed and YSlow, which are Google's and Yahoo's site speed measurement tools respectively.
These will give you a number of different recommendations to improve the performance of your site in various ways, covering everything from image optimisation through to web server tweaks.
Each site is different and each fix needs to be applied in a different way at different layers of your stack.
There is no one size fits all solution. Approach the problem scientifically and methodically.
As I mentioned at the outset, it is tempting to apply an arbitrary list of fixes to your site in order to "make it faster". The reality is that this approach is slapdash and neither the most intelligent nor the most effective way to improve your site's performance.
The approach outlined here is far superior simply because it is concerned with the performance issues that are affecting your site directly.
If you are interested in a performance review as a part of a wider Magento code audit then you can find out more here.