What are the benefits of HTTP/3, and how does it work? On a technical level, how does it help improve loading speed? We explain what you need to know about the HTTP/3 protocol.
Let’s quickly summarize the previous episodes. HTTP/2 has enabled advances in web performance compared to HTTP/1, including:
- multiplexing requests within a single TCP connection, which reduces the number of "request – response" cycles (fewer round trips, or RTTs, means more speed),
- more flexibility in prioritizing the download of static resources, by removing the constraint of strictly sequential downloads,
- the HPACK HTTP header compression format,
- the server push mechanism, which allows resources to be sent before the client requests them.
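As a quick check, you can see which version a given server actually negotiates. Here is a minimal sketch using the third-party httpx client (assuming it is installed with its HTTP/2 extra; note that httpx does not speak HTTP/3):

```python
# Check which HTTP version a server negotiates.
# Requires: pip install "httpx[http2]"
import httpx

with httpx.Client(http2=True) as client:
    response = client.get("https://www.example.com/")
    # Prints "HTTP/2" if the server negotiated it, otherwise "HTTP/1.1".
    print(response.http_version)
```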
Although HTTP/2 was designed to optimize page loading speed, the TLS encryption it requires in practice (HTTPS) can cause slowdowns (our engine can correct this). As a result, the expected improvements have not always materialized. Furthermore, in some cases, the server push feature is complex or even impossible to deploy (Chrome has deprecated it).
HTTP/3, a new evolution of the HTTP protocol, aims for even better performance and opens up very interesting prospects, which we will detail. The optimizations compared to HTTP/2 mainly concern the transport and security layers, as we explained in this article.
To understand where these changes occur and with what impact, let’s start by looking at the organization of the different layers of transfer protocols.
From HTTP/1 to HTTP/3: Understanding data transfer and the layers of different protocols

From top to bottom: with HTTP/2, the HTTP layer handles requests and data interpretation, the TLS layer provides security through encryption, the TCP layer ensures reliable data transport and retransmits lost packets, and the IP protocol routes packets from one device to another. With HTTP/3, TCP is replaced by UDP, and the TLS layer is integrated into QUIC (source: Robin Marx / Smashing Magazine)
The IP layer
This is the most rudimentary network layer, and its specifications date back to 1974. It guarantees neither delivery, nor data integrity, nor the ordering of transmitted packets… In short, it is not reliable.
Also, since 32-bit addresses are limited to about 4 billion (the number of connected devices in the world has largely exceeded this threshold), the IPv6 standard (128-bit addresses) emerged to overcome this limit, at the cost of an additional control step compared to IPv4. Its adoption was approaching 40% in 2022.

Source: Google
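To put the two address spaces side by side, here is a small illustrative snippet using only Python's standard library (the addresses are arbitrary examples):

```python
# Comparing the IPv4 and IPv6 address spaces.
import ipaddress

print(2 ** 32)   # 4294967296 possible IPv4 addresses (~4 billion)
print(2 ** 128)  # ~3.4e38 possible IPv6 addresses

print(ipaddress.ip_address("93.184.216.34").version)                       # 4
print(ipaddress.ip_address("2606:2800:220:1:248:1893:25c8:1946").version)  # 6
```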
The TCP layer
Transmission Control Protocol (TCP) is the layer that ensures reliability. Its specifications also date back to 1974, with a revised edition in 1981. TCP manages the exchanges between server and client: the ordering of data packets, the retransmission of lost packets, acknowledgments of delivery to the sender, handshakes, and error control. The number of checks required for each round trip (handshake) was already optimized with HTTP/2 to improve page loading speed.
LIMITATIONS OF TCP AND REASONS FOR CHOOSING UDP
Before transporting anything, TCP needs an initial handshake to establish a new connection and to ensure that the client and server exist and can exchange data. This requires a complete round trip (RTT) across the network, and depending on the distance to cover, that trip can take more than 100 ms. This is latency.
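You can observe this cost yourself: socket.create_connection() only returns once the TCP handshake has completed, so timing it gives roughly one network round trip. A minimal sketch with the standard library (the host is just an example):

```python
# Measure the cost of the TCP 3-way handshake (~1 RTT).
import socket
import time

start = time.perf_counter()
with socket.create_connection(("example.com", 80), timeout=5):
    elapsed_ms = (time.perf_counter() - start) * 1000
print(f"TCP handshake took ~{elapsed_ms:.1f} ms")
```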
Also, TCP treats all transported data as a single file, or stream, even though that stream is usually a mixture of data packets from different resources. If a packet within the stream is lost in transit, Head-of-Line blocking occurs. This blocking slows down the overall loading of the web page and impacts the protocol layers above.
In theory, TCP could evolve through extensions. Unfortunately, these extensions cannot be deployed at scale, because they are not compatible in all situations, given the very large diversity of equipment on the network path (firewalls, load balancers, routers, caching servers, proxies…).
UDP thus steps in to circumvent this problem and to enable the desired evolutions, since deploying optimizations at scale is not possible at the TCP layer.
The UDP layer
The User Datagram Protocol (UDP) layer specifications date back to 1980. UDP is used, for example, for clock synchronization between computer systems (NTP), VoIP applications, video streaming, gaming, DNS, and DHCP.
UDP does not provide for handshakes: it guarantees neither the delivery nor the ordering of data packets, nor their integrity. That work is left to the application layer, where QUIC handles it and integrates the TLS security layer. HTTP/3 is thus built on top of the basic network stack, rather than rebuilding it.
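The contrast with TCP is visible even at the socket level: a UDP datagram can be sent without any prior connection. A minimal sketch (the address is a documentation placeholder; nothing needs to be listening, and UDP will not tell you either way):

```python
# UDP in a nutshell: no handshake, no delivery guarantee.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", ("192.0.2.1", 9999))  # fire-and-forget datagram
sock.close()
```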
QUIC (Quick UDP Internet Connection)
QUIC takes over almost all the features of TCP:
- acknowledgements for received packets and retransmission of lost packets,
- connections and handshakes,
- flow and congestion control to avoid overloads.
QUIC overcomes the limitations of the network layers without requiring kernel upgrades across the Internet's systems. It is "simply" implemented in a smarter, more performant way than TCP, compensating for TCP's slowness by delegating the reliability and security features to user space. And to ease its deployment, for the compatibility reasons with intermediaries mentioned above, it runs on top of UDP.
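To make this concrete, here is a minimal connection sketch using the third-party aioquic library, one of the user-space QUIC implementations (a sketch, assuming aioquic is installed and the hostname is replaced by a real QUIC-enabled server):

```python
# Open a QUIC connection in user space, over UDP, with TLS 1.3
# integrated into the handshake. Requires: pip install aioquic
import asyncio

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def main() -> None:
    config = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
    # connect() performs the combined QUIC + TLS 1.3 handshake.
    async with connect("quic.example.com", 443, configuration=config) as protocol:
        await protocol.ping()  # one round trip over the established connection

asyncio.run(main())
```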
Let’s explore the details of this layer and its benefits.
REDUCING THE RISK OF FLOW BLOCKING DURING DATA TRANSMISSION
As a reminder: if data packets are lost during a transfer, the Head-of-Line blocking phenomenon we described occurs (data streams are blocked until the lost packets are retransmitted).
The fact that the streams are multiplexed with HTTP/2 (i.e. one TCP connection for multiple data streams transferred simultaneously) helps to limit blocking due to packet loss. Indeed, if a packet is lost, the other simultaneous streams can continue.


Source: Cloudflare
However, because HTTP/2 relies on TCP, the risk of Head-of-Line blocking is reduced, but it is not zero. Why? Because TCP does not know what it is carrying. Even though a stream consists of a mixture of data packets from different resources, if one packet is lost, packets from all the files can be blocked, because TCP treats what it carries as a single whole with a missing part.
This is where UDP comes into its own with HTTP/3: in case of packet loss, QUIC knows what it is carrying, and only the stream affected by the loss is held up; the others can continue on their way. Furthermore, with QUIC, new streams can start before the others have been fully delivered.
Finally, HTTP/2's HPACK header compression format is replaced by QPACK in HTTP/3, which simplifies stream prioritization and reduces the risk of errors, and therefore of blocking.
REDUCED LATENCY, AND INCREASED RELIABILITY AND SECURITY OF EXCHANGES
With HTTP/2 over HTTPS, the plaintext HTTP data is first encrypted by TLS, then transported by TCP. With HTTP/3, QUIC encapsulates TLS, which reduces latency because there is no separate TCP handshake step: QUIC saves a round trip compared to TLS + TCP. In the absence of TCP, the reliability features move above the UDP layer (TLS is still present, simply encapsulated within QUIC).
The fact that all data is fully encrypted provides a higher level of security: at no point is data transmitted in clear text. The downside of this security is that intermediaries have a harder time monitoring QUIC traffic, and some block it as a precaution since they cannot inspect its content. Thus, behind certain firewalls or proxies, it is impossible to access sites or services via HTTP/3. This is an inherent limitation of the protocol.
Also, QUIC encrypts each packet individually with TLS, whereas TLS + TCP can encrypt several packets at once. This is why QUIC is potentially heavier in high-throughput scenarios; in return, it establishes connections faster.

HTTP/3 saves one exchange compared to HTTP/2 with TLS 1.3 (source: Robin Marx / Smashing Magazine)
BETTER STABILITY THANKS TO CONNECTION IDENTIFIERS
With TCP, a connection is defined by 4 parameters that can change while browsing (on a mobile device on the move, for example): client IP address, client port, server IP address, and server port.
Another new feature introduced by QUIC is that connections have a unique identifier that persists across IP address changes. This is useful for maintaining a connection when switching from a mobile network to a Wi-Fi network.
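A purely illustrative sketch of the difference (the addresses, ports, and ID are made up):

```python
# Why a QUIC connection survives a network switch while TCP does not.

# TCP identifies a connection by this 4-tuple; switching from 4G to Wi-Fi
# changes the client IP, so the key changes and the connection is lost.
tcp_key_on_4g = ("203.0.113.10", 51320, "198.51.100.7", 443)
tcp_key_on_wifi = ("192.0.2.55", 49152, "198.51.100.7", 443)
assert tcp_key_on_4g != tcp_key_on_wifi  # new key: connection reset

# QUIC identifies the connection by an ID agreed during the handshake,
# independent of addresses and ports: the same key still works after the move.
connections = {}
quic_connection_id = bytes.fromhex("c0ffee01")
connections[quic_connection_id] = "session state (TLS keys, streams, ...)"
print(connections[quic_connection_id])  # still found after the IP change
```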
MORE FLEXIBILITY AND GREATER SCALABILITY
QUIC is more flexible than TCP: its transport parameters can be customized (we will see a concrete application for optimizing loading speed a little later).
Since the data is encrypted, intermediaries can no longer interfere with the protocol, and only the client and server endpoints need updating to keep up with its evolution. There are therefore no longer technical limitations on enriching the information carried in the data packets: they can be sent and received without constraint.
HTTP/3 and web page loading speed
We have seen that HTTP/3 helps reduce latency. But can we say that QUIC exploits bandwidth more efficiently than TCP? And can HTTP/3 eradicate the problem of packet loss for good?
Managing bandwidth and data transport with QUIC
In reality, a connection never uses 100% of the bandwidth, and since its total capacity cannot be known in advance (and can also vary), TCP sends packets to "test the waters" and performs what is called congestion control. That is, it sends more and more packets until some are lost, which indicates that the maximum throughput has been reached.
Only a small amount of data is sent to start with, and the rate increases gradually (like when you turn on a tap). This is where the famous recommendation to keep critical data under 14 KB comes from: the goal is to ensure it passes "into the pipe" as soon as "the tap is turned on" (which also favors your FCP, and that should not change with HTTP/3).
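The 14 KB figure is easy to reconstruct with a back-of-the-envelope calculation (actual values vary by operating system and network; 10 segments is the common initial window from RFC 6928):

```python
# Where the ~14 KB recommendation comes from.
initial_window_segments = 10  # common initial TCP congestion window (RFC 6928)
mss_bytes = 1460              # typical maximum segment size on Ethernet

first_flight = initial_window_segments * mss_bytes
print(f"{first_flight} bytes ≈ {first_flight / 1024:.1f} KiB")  # 14600 bytes ≈ 14.3 KiB
```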
We have seen that QUIC no longer blocks every stream when data packets are lost, but that does not mean it forgoes congestion control, nor that Head-of-Line blocking can never occur! So let's make a few clarifications.
Multiplexing, QUIC and Head-of-Line blocking management
Each browser has its own multiplexing strategy to organize the transmission of data packets during a transfer. This organization has an impact on when resources can be used:

Representation of how round-robin and sequential multiplexing work. Generally, sequential multiplexing offers the best performance (source: Robin Marx / Smashing Magazine)
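To make the difference concrete, here is a toy sketch of the order in which two resources' chunks go on the wire under each strategy (file names and chunk counts are invented):

```python
# Send order under round-robin vs. sequential multiplexing.
streams = {
    "app.js": ["js1", "js2", "js3"],
    "style.css": ["css1", "css2", "css3"],
}

# Round-robin: chunks are interleaved; both resources finish late.
round_robin = [chunk for pair in zip(*streams.values()) for chunk in pair]
print(round_robin)  # ['js1', 'css1', 'js2', 'css2', 'js3', 'css3']

# Sequential: one resource is sent in full first, so it is usable sooner.
sequential = [chunk for chunks in streams.values() for chunk in chunks]
print(sequential)   # ['js1', 'js2', 'js3', 'css1', 'css2', 'css3']
```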
We now know that with QUIC, if packets are lost, the other active streams can continue.
However, with multiplexing, if packets are lost for every stream, blocking can still occur, since none of the streams can continue while each of them is missing packets.
The elimination of Head-of-Line blocking introduced by QUIC remains very interesting, but with multiplexing, depending on the number of simultaneous streams and lost packets, its benefits can be difficult to estimate.
In any case, it is better to keep relying on a good caching strategy, and to automate it to make it more efficient and save time, in order to limit the amount of data sent at once and to prevent critical resources from blocking rendering (since they will be served from cache). Indeed, congestion control remains necessary to optimize bandwidth usage, and there will always be some packet loss, since loss is the very signal that the bandwidth is being used to its maximum. The impact of these losses, however, is limited thanks to QUIC over UDP.
Time saved on RTTs and handshakes thanks to QUIC
We mentioned that QUIC is more scalable, and here is a concrete application: an extension can defer packet acknowledgments, for example acknowledging every 10 packets instead of every 2. This saves time, since checks and acknowledgments are spaced further apart, and that adds up to many milliseconds saved across connection round trips!
Additionally, since QUIC embeds TLS, the connection setup and the encryption negotiation happen during the same trip, saving a further round trip.
Finally, HTTP/3 still allows you to take advantage of TLS session resumption when revisiting a website: no need to wait for a new TLS handshake.
For these reasons, HTTP/3 relieves servers by limiting the number of round trips, improving performance and loading speed.
Upcoming developments with HTTP/3
In its current version, QUIC focuses on transporting HTTP, largely reusing HTTP/2's semantics. As we pointed out at the beginning of the article, HTTP/3 is an advance that offers very interesting prospects for web performance.
- In the long term, QUIC is intended to transport protocols other than HTTP, such as DNS, SMB, or SSH…
- In its standardized version, QUIC uses TLS 1.3 encryption, which further improves speed by saving round trips and handshakes between server and client.
- Packet loss resilience is also being improved; experiments are underway (and you can contribute to them).
- Thanks to connection IDs, QUIC already maintains the stability of a connection when the IP address changes. In the long term, it could allow two networks to be used simultaneously to benefit from even more bandwidth.
- QUIC is a fully reliable protocol, but UDP is not. QUIC could be extended to send "unreliable" data, which could be useful for games or live video streaming. For video, QUIC could also make it easier for browsers to automatically estimate and select the best video quality for the browsing conditions.
Challenges in generalizing the implementation of HTTP/3
EXPAND UDP SUPPORT
UDP is more minimal (and more flexible) than TCP: as we have seen, it does not handle lost packets, and the reliability and security features are provided above it, by QUIC. UDP therefore pushes more work onto the software layers and the processor (the work it does not do itself), and its support in some network equipment is currently less widespread. In particular, corporate firewalls can block UDP indiscriminately, and with it HTTP/3 traffic, as mentioned earlier.
BACKWARDS COMPATIBILITY
One of the priorities of the IETF, which develops and promotes Internet standards, is to ensure backward compatibility. This is achieved with the Alt-Svc HTTP response header, which tells the client that HTTP/3 is available; if the client does not support it, the HTTP/1.1 and HTTP/2 versions remain available.
This is also one of the differences from HTTP/2, whose use is negotiated as part of the TLS handshake (via ALPN).
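You can check this advertisement yourself over an ordinary HTTP/1.1 or HTTP/2 request, using only the standard library (the URL is merely an example of a site known to deploy HTTP/3):

```python
# Does this server advertise HTTP/3? Look for "h3" in the Alt-Svc header.
import urllib.request

with urllib.request.urlopen("https://www.cloudflare.com/") as response:
    alt_svc = response.headers.get("Alt-Svc", "not advertised")
print(alt_svc)  # e.g. h3=":443"; ma=86400 when HTTP/3 is available
```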
CACHING OF CONNECTION DATA
Finally, as HTTP/3 spreads and becomes standard practice, the goal is for clients to cache parameters from previous HTTP/3 connections so they can reconnect directly on subsequent visits (and thus reduce the number of setup round trips to 0, the holy grail).
In conclusion, we have seen that some features available since HTTP/2, such as server push, have not produced the expected gains for web performance and display speed. Others, such as stream prioritization, are not without technical difficulties of implementation. This is why good practices such as concatenation, compression, and minification of resources, which you benefit from automatically with Fasterize, remain relevant and essential for optimizing the loading speed of your pages, even with HTTP/3!
HTTP/2 doesn’t magically halve load times, and neither does HTTP/3. To claim otherwise would be dishonest.
However, the improvements in loading speed are already clearly visible on slow networks, and the improved resilience to packet loss, the simultaneous use of different networks for more bandwidth, the better handling of video streams… these are truly exciting prospects for web performance!
In short, this is an important step forward, and the webperf community is unanimous: we must jump on the bandwagon now to take advantage of the developments that are coming – and it is precisely our role to support you in this direction.
Have questions about the impact of HTTP/3 on loading speed?
Ask our experts: