For a few days now, articles have been flourishing around a tweet by Chris Peterson, technical program manager at Mozilla, announcing that the YouTube page loads 5 times slower in Firefox and Edge than in Chrome. In this article, we flag the methodology flaws in these performance measurements and take the opportunity to step back and look at the underlying question of web standards.
How to effectively measure web performance?
On July 24, Chris Peterson, Mozilla’s technical program manager, wrote on Twitter:

This is said to be due to “YouTube’s Polymer (an open source JavaScript library) rework [which] relies on the deprecated Shadow DOM v0 API.” This API is said to be implemented natively only in Chrome, and the load time of a YouTube page reportedly goes from about 1 second in Chrome to 5 seconds in Firefox and Edge.
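To make the compatibility gap concrete, here is a minimal sketch (our illustration, not YouTube’s actual code) of how a page can detect which Shadow DOM API a browser exposes: `createShadowRoot` is the deprecated v0 API that only Chrome ever shipped natively, while `attachShadow` is the standardized v1 API.

```javascript
// Minimal sketch: detecting which Shadow DOM API the browser exposes.
// Shadow DOM v0 (createShadowRoot) only ever shipped natively in Chrome;
// v1 (attachShadow) is the standardized API.
const hasShadowDomV0 = typeof Element.prototype.createShadowRoot === 'function';
const hasShadowDomV1 = typeof Element.prototype.attachShadow === 'function';

if (hasShadowDomV1) {
  // Standard path: the v1 API is available.
  console.log('Native Shadow DOM v1 available');
} else if (hasShadowDomV0) {
  // Chrome-only legacy path, the one YouTube's Polymer rework relied on.
  console.log('Only the deprecated Shadow DOM v0 API is available');
} else {
  // Firefox and Edge at the time: no native support, so the page loads
  // JavaScript polyfills, which is where the extra load time comes from.
  console.log('No native Shadow DOM: falling back to polyfills');
}
```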
A word of caution about these performance measurements
If we look at his measurement method, described a little further down in his thread, we see that the tests were really not carried out according to best practices:

“The ‘5x difference’ I measured (5 seconds vs. 1 second) was the time it took to replace the gray placeholder boxes with content (comments and thumbnails) on my Mac and Windows laptops and on my 200 Mbps Internet connection yesterday. Your results may vary depending on your computer and network speed.”
To get more reliable results, Chris should not have performed these tests on his workstation. He should have used:
- a Synthetic Monitoring tool: tests are run from servers in data centers over a throttled connection that simulates the conditions an average user might encounter. Web pages are loaded in a real browser so that the collected performance metrics are close to the real user experience.
- or, better yet, a Real User Monitoring (RUM) tool: measurements are no longer taken at a single point in time but continuously. A JavaScript snippet injected into each web page measures the page load times of every real visit (see the sketch below).
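As an illustration of the RUM principle, here is a minimal sketch based on the browser’s Navigation Timing API. The /rum-collect endpoint is a hypothetical placeholder, and real RUM products collect far more metrics than this.

```javascript
// Minimal sketch of the RUM principle: a small script included on every page
// reads the browser's Navigation Timing data and sends it to a collection
// endpoint ("/rum-collect" is a hypothetical URL, not a real service).
window.addEventListener('load', function () {
  // Wait one tick so that loadEventEnd is populated.
  setTimeout(function () {
    var t = performance.timing;
    var metrics = {
      page: location.pathname,
      ttfb: t.responseStart - t.navigationStart,        // time to first byte
      domReady: t.domContentLoadedEventEnd - t.navigationStart,
      loadTime: t.loadEventEnd - t.navigationStart      // full page load
    };
    // sendBeacon posts the data without delaying navigation away from the page.
    navigator.sendBeacon('/rum-collect', JSON.stringify(metrics));
  }, 0);
});
```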
Learn more about Synthetic Monitoring vs Real User Monitoring >>
Running the test with a Synthetic Monitoring tool: WebPageTest
| | Chrome | Firefox |
| --- | --- | --- |
| URL | https://www.youtube.com | https://www.youtube.com |
| Location | Paris | Paris |
| Browser | Chrome | Firefox |
| Connection | Average French DSL | Average French DSL |
| Runs | 3 | 3 |
| Test link | Test Chrome | Test Firefox |
| Load time | 2.831 s | 4.082 s |
| First Byte | 0.359 s | 0.548 s |
| Start Render | 1.300 s | 2.500 s |
| Speed Index | 2136 | 3165 |
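For reproducibility, the same comparison can also be scripted against the WebPageTest REST API rather than launched by hand. The sketch below is only an illustration under assumptions: the API key is a placeholder, and the exact location/connectivity labels depend on what the WebPageTest instance actually exposes.

```javascript
// Minimal sketch: launching the same Chrome vs. Firefox comparison through
// the WebPageTest API. API key and location labels are placeholders.
const API_KEY = 'YOUR_WPT_API_KEY';

async function runTest(browser) {
  const params = new URLSearchParams({
    url: 'https://www.youtube.com',
    location: `Paris:${browser}.DSL`, // illustrative "location:browser.connectivity" label
    runs: '3',
    f: 'json',
    k: API_KEY
  });
  const res = await fetch(`https://www.webpagetest.org/runtest.php?${params}`);
  const data = await res.json();
  // The JSON response points to URLs to poll once the runs complete.
  return data.data && data.data.jsonUrl;
}

runTest('Chrome').then(url => console.log('Chrome results:', url));
runTest('Firefox').then(url => console.log('Firefox results:', url));
```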
These initial results (which would need to be confirmed by further rounds of tests) suggest that YouTube does indeed load significantly slower in Firefox than in Chrome.
But this debate also encourages us to take a step back and look at the use of non-standard APIs.
The risk of using non-standard APIs
What is a standard and how does it come about?
Web standards are a common foundation that ensures the consistency of the code that makes up a web page and, by extension, the internet as a whole.
These standards are defined by standardization bodies such as the W3C or the IETF. They guarantee compatibility, but they also make it possible for the web to evolve.
The birth of a standard theoretically goes through several stages:
- Creation of a specification
- Development
- Repeated rounds of review by the Internet community
- Revisions based on that feedback
- Adoption as a standard by the appropriate body, and publication as an RFC
But the reality is not so simple because of the difficulty of creating specifications of high technical quality, the need to take into account the interests of all stakeholders, the importance of obtaining broad community consensus, and the difficulty of assessing the usefulness of a particular specification to the Internet community.
Often, standards arise from private initiatives. Google, for example, is very active in proposing them, since it has a lot of data at its disposal.
This is partly how HTTP/2 came about:
Google unveiled the SPDY protocol in 2009: a protocol designed to reduce page load times by prioritizing resources and optimizing their transfer so that they are all sent over a single connection (this is called multiplexing). SPDY then served as the basis for HTTP/2.
The same goes for QUIC. Born from a Google initiative, it was then substantially improved by the community and standardized by the IETF. We now speak of gQUIC for the initial Google implementation and QUIC for the community-improved, standardized version.
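Incidentally, you can check from a page itself which protocol actually delivered its resources, which makes these divergences between browsers easy to observe. This small sketch uses the standard Resource Timing API; the nextHopProtocol field reports ALPN identifiers such as "http/1.1", "h2", or QUIC-based values.

```javascript
// Minimal sketch: counting how many of the page's resources were delivered
// over each protocol, using the Resource Timing API's nextHopProtocol field.
const entries = performance.getEntriesByType('resource');
const byProtocol = {};
for (const entry of entries) {
  const proto = entry.nextHopProtocol || 'unknown';
  byProtocol[proto] = (byProtocol[proto] || 0) + 1;
}
console.table(byProtocol); // e.g. { "h2": 42, "http/1.1": 3 }
```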
But some of these initiatives are never standardized. This is particularly the case for WebP, an image format designed by Google and shipped in Chrome that offers better compression with no visible loss of quality. WebP, however, has not become a standard.
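Because support differs between browsers, a site that wants WebP’s savings has to feature-detect it and keep a standard format as a fallback. Below is a common canvas-based heuristic; the asset paths are hypothetical.

```javascript
// Minimal sketch: detecting WebP support at runtime so that WebP is only
// served to browsers that can decode it, with a standard format as fallback.
function supportsWebP() {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 1;
  // Browsers with WebP support return a data URL starting with
  // "data:image/webp"; others silently fall back to PNG.
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}

const img = document.createElement('img');
img.src = supportsWebP() ? '/hero.webp' : '/hero.jpg'; // hypothetical asset paths
document.body.appendChild(img);
```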
We could also cite the example of AMP (Accelerated Mobile Pages), once again conceived by Google, which is widely contested by the community and has not led to standardization either.
Standards thus coexist with browser-specific developments, which leads to divergences, particularly in performance, as in the YouTube case discussed above.
“It’s good that there are initiatives to change things. But it’s a shame that there is this divergence and that the developments come mainly from private initiatives. We end up improving performance for a given browser. This doesn’t seem to me to be a good thing. Especially since not everyone has the same weight (Microsoft vs. Apple or Google, for example). Microsoft has pushed image formats that have never taken off. Ideally, these initiatives should be born within standardization organizations.” Stéphane Rios, CEO of Fasterize