There are two main approaches to measuring web performance: Synthetic Monitoring and Real User Monitoring (RUM). Although they are often set in opposition, the two methods are complementary. This article reviews both types of monitoring: what are they, what are they used for, how can they be used effectively, and what pitfalls should be avoided?
Synthetic Monitoring (aka Active monitoring)
Tests are run from servers in data centers, using a throttled connection to simulate the conditions an average user may experience. Web pages are actually loaded in a real web browser to collect performance metrics that reflect the user experience.
- Ability to run the same scenario several times and therefore compare results reliably (thanks to scripted scenarios and systematic cache clearing)
- Control of test conditions
- Nothing to install
Data Collected with Synthetic Monitoring
Synthetic Monitoring makes it possible to collect several indicators:
- Waterfall and HTTP headers:
They make it possible to analyse the loading of each element of the page over time. These graphs are particularly useful for identifying blocking elements.
- Filmstrip:
Thanks to the filmstrip, page loading is no longer represented with numbers or graphs, but with images. It lets you see precisely what the user sees on screen, second by second.
- Network metrics (number of requests, page weight, total load time, etc.)
How to make the most of Synthetic Monitoring?
Collecting data is important, but you still need to know how to use it. Synthetic Monitoring can pinpoint problems in page loading, but it can also serve broader purposes.
Synthetic Monitoring can also be used to benchmark a website against its competitors.
Another point of comparison is the monthly ranking published by the JDN.
Synthetic Monitoring also makes it possible to establish and manage a Performance Budget. The idea is to set technical thresholds that must not be exceeded. To do so, define the metrics that seem most relevant and set a limit for each (for example: web pages must not exceed x KB, Start Render must stay below 2 seconds, etc.).
To set up a Performance Budget, it is important that every person contributing to the evolution of your website is included in the discussion. Make sure that everyone has the same level of knowledge of the issue before you engage the team in any decision making.
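A Performance Budget check can be automated once the thresholds are agreed on. The sketch below is a minimal illustration of the idea; the metric names, threshold values and function names are all hypothetical, not part of any standard tool.

```typescript
// Hypothetical metric names and thresholds, for illustration only.
type Metrics = Record<string, number>;

interface BudgetRule {
  metric: string;   // e.g. "pageWeightKb", "startRenderMs"
  maxValue: number; // the threshold not to exceed
}

// Returns the list of budget rules that a measured page violates.
function checkBudget(metrics: Metrics, budget: BudgetRule[]): BudgetRule[] {
  return budget.filter((rule) => {
    const measured = metrics[rule.metric];
    return measured !== undefined && measured > rule.maxValue;
  });
}

// Example: a 1800 KB page with a 2.4 s Start Render, checked against
// a 1500 KB page-weight budget and the 2-second Start Render threshold
// mentioned above. Both rules are exceeded, so a build could be failed
// or an alert raised.
const violations = checkBudget(
  { pageWeightKb: 1800, startRenderMs: 2400 },
  [
    { metric: "pageWeightKb", maxValue: 1500 },
    { metric: "startRenderMs", maxValue: 2000 },
  ],
);
```

In practice, such a check would run after each synthetic test, turning the budget from a document into an enforced rule.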
Reporting SPOFs (Single Points of Failure)
Synthetic Monitoring tools also have the advantage of detecting SPOFs and quantifying their impact on the user experience.
Real User Monitoring (aka Passive monitoring)
With RUM, measurements are no longer carried out at a specific moment but continuously, on real traffic.
- Analysis of the website traffic
- Continuous measuring
- Results take all users into account, whatever their browser, connection type (ADSL, 3G, EDGE, etc.) or location.
- RUM tools can capture real human behaviour and events alongside performance data.
With RUM, you no longer need to predefine the different use cases that may occur.
Data collected with RUM
- Navigation Timing: RUM tools take advantage of the Navigation Timing API, available in recent browsers, which exposes a timestamp for each phase of the page load.
This API is not supported in Safari (iOS < 9) or IE ≤ 8; browser compatibility tables list which browsers support the Navigation Timing API.
- Resource Timing: this other API makes it possible to measure in detail the loading time of each static resource.
- Conversion rates and other business data: some RUM tools can correlate loading times with conversion rates. This is notably the case with Google Analytics, Webperf.io and Cedexis.
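To make the Navigation Timing item above concrete, here is a minimal sketch of how a RUM script derives classic metrics from the API. The attribute names (`navigationStart`, `responseStart`, `domContentLoadedEventEnd`, `loadEventEnd`) are real `PerformanceTiming` attributes; the `deriveMetrics` helper and the sample values are invented for illustration.

```typescript
// Subset of the browser's PerformanceTiming attributes used here.
interface NavTiming {
  navigationStart: number;
  responseStart: number;
  domContentLoadedEventEnd: number;
  loadEventEnd: number;
}

// Derive a few classic metrics as deltas from navigationStart.
function deriveMetrics(t: NavTiming) {
  return {
    ttfbMs: t.responseStart - t.navigationStart, // Time To First Byte
    domReadyMs: t.domContentLoadedEventEnd - t.navigationStart,
    fullLoadMs: t.loadEventEnd - t.navigationStart,
  };
}

// In a browser this would be fed with window.performance.timing;
// here a fabricated timing object stands in for it.
const metrics = deriveMetrics({
  navigationStart: 1000,
  responseStart: 1250,
  domContentLoadedEventEnd: 2100,
  loadEventEnd: 3400,
});
// → { ttfbMs: 250, domReadyMs: 1100, fullLoadMs: 2400 }
```

A RUM beacon would then send these deltas to a collection endpoint for every real visit, which is exactly what makes the data continuous rather than point-in-time.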
Synthetic Monitoring and RUM do not share the same indicators, but they do not oppose each other. For example, Synthetic Monitoring allows you to benchmark a company against its competitors, while RUM reflects the actual customer experience. The two methods thus complement each other. There are, however, some pitfalls to avoid.
Pitfalls to avoid
The results presented by these different tools are difficult to compare. Moreover, each measurement has its own margin of error.
The speed reported by RUM results from a mix of first views and repeat views in very different contexts. By contrast, with a Synthetic Monitoring tool, measurements are generally made on a first view in a controlled (laboratory) context. As a result, the figures differ, yet all of them are correct in their own context.
Steve Souders, web performance guru, shared a small study on his website showing that RUM and Synthetic Monitoring data are not directly comparable.
| | Chrome 23 | Firefox 16 | IE 9 |
|---|---|---|---|
| Synthetic First View (secs) | 4.64 | 4.18 | 4.56 |
| Synthetic Repeat View (secs) | 2.08 | 2.42 | 1.86 |
| Synthetic 50/50 (secs) | 3.36 | 3.30 | 3.21 |
| RUM data points | 94 | 603 | 89 |
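The "Synthetic 50/50" row above is simply the mean of the first-view and repeat-view times for each browser. The tiny helper below reproduces that arithmetic (the function name is ours, not Souders's):

```typescript
// A 50/50 blend of first view and repeat view is their average,
// rounded here to two decimals to match the table.
function blend5050(firstView: number, repeatView: number): number {
  return Math.round(((firstView + repeatView) / 2) * 100) / 100;
}

const chrome = blend5050(4.64, 2.08);  // 3.36
const firefox = blend5050(4.18, 2.42); // 3.30
const ie = blend5050(4.56, 1.86);      // 3.21
```

Note that the real RUM figures come from very different connection and cache conditions, so no such simple formula recovers them from synthetic data.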
This study highlights two checkpoints that require special attention:
- RUM and Synthetic Monitoring may lead to very different results. This doesn’t mean the results are wrong (or right), they simply come from two different tools. One should only compare what is comparable!
- Be careful not to analyse only Synthetic Monitoring results. The actual user experience may be a bit slower as shown by the figures above.
The loading speed of a website has many different aspects; it cannot be captured by a single figure.
The advantage of this plurality of metrics is that each one speaks to a different stakeholder.
Your e-commerce manager will be sensitive to business gains, while your SEO expert will be more interested in the PageSpeed score or the TTFB, and your technical director will react to time spent on SSL negotiation or ad tags.
Each of them has their own reference points and requirements. For a website to perform well, every one of these players must become a driving force for web performance.