
If I understand correctly, QUIC exists to multiplex multiple streams over the same UDP channel, sharing a single key exchange.

QUIC also has an unreliable transport mode for VoIP, etc. https://datatracker.ietf.org/doc/draft-pauly-quic-datagram/

Has anyone considered a "file transfer" mode for QUIC that uses either this unreliable mode or another "less reliable" mode? Would file transfer benefit much from delivery that is even less ordered than what a QUIC stream provides?

There is a bittorrent variant, µTP (BEP-29), which exists partially to interfere less with residential internet connections, while supporting bittorrent's usual highly unordered delivery.

I suppose a file transfer protocol for QUIC could also be bittorrent-like by accepting packet-sized chunks from multiple senders, but that's another topic.

Jeff Burdges
  • µTP preserves sequential delivery; its primary difference from TCP is the congestion controller. – the8472 Jun 23 '19 at 16:17
  • libswift may be of interest. It bakes a bittorrent-like protocol into the transport protocol (making it content-centric networking in a way). http://www.cs.kent.edu/~javed/class-P2P13F/papers-2013/P06-libswift-petrocco.pdf https://github.com/libswift/libswift – Arvid Jul 02 '19 at 10:41

1 Answer


One advantage of unordered, unreliable file transfer protocols is that they do not need to pay the memory cost of keeping a retransmit buffer that grows with the BDP of the connection. Incorrect sizing of those buffers can lead to significant performance losses on high-BDP links.
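To make the buffer-sizing concern concrete, here is a back-of-the-envelope calculation (a hypothetical illustration, not part of any protocol): the retransmit buffer a reliable transport must hold grows with the bandwidth-delay product (BDP) of the link.

```python
def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> float:
    """Bytes that must stay buffered to keep a reliable pipe full."""
    return bandwidth_bits_per_s * rtt_s / 8

# A 1 Gbit/s link with a 100 ms round-trip time:
buf = bdp_bytes(1_000_000_000, 0.100)
print(buf)  # 12500000.0 -> ~12.5 MB of retransmit buffer per connection
```

A transport sender must keep at least this much unacknowledged data around for possible retransmission; an unreliable, application-retransmitted design can skip that entirely because the data already sits in the source file.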

The random-access persistent storage backing the files at each end allows reordering and retransmission to be handled at the application level.
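A minimal sketch of that idea (all names and the chunk size are hypothetical, not from any real protocol): the receiver writes each chunk directly to its offset in the destination file, so the file itself serves as the reorder buffer and only a set of missing chunk indices is kept in memory.

```python
CHUNK = 1200  # hypothetical chunk size, roughly one datagram payload


class FileReceiver:
    """Reassemble a file from fixed-size chunks arriving in any order.

    Random-access writes replace the transport's reorder/retransmit
    buffer; memory use is bounded by the set of missing indices,
    not by the connection's bandwidth-delay product.
    """

    def __init__(self, path: str, total_size: int):
        self.f = open(path, "w+b")
        self.f.truncate(total_size)  # pre-size the sparse destination file
        self.missing = set(range((total_size + CHUNK - 1) // CHUNK))

    def on_chunk(self, index: int, payload: bytes) -> None:
        # Duplicates (e.g. spurious retransmits) are simply ignored.
        if index in self.missing:
            self.f.seek(index * CHUNK)
            self.f.write(payload)
            self.missing.discard(index)

    def done(self) -> bool:
        return not self.missing
```

The `missing` set doubles as the state needed to request application-level retransmits from any sender that has the file, which is what makes the multi-source, bittorrent-like variant from the question straightforward.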

The absence of head-of-line blocking may also lead to marginally better IO utilization.

But those issues are edge cases. For bulk-transferring a single large file within one continent, the reliable stream mode of QUIC will probably perform near the throughput optimum.

the8472
  • Thanks! I have small groups of 1-5 nodes serving distinct small 1-5 kB files to all 1k to 5k other nodes about every 5-10 seconds. All 1-5k nodes receive a distinct file, but they can receive it from any of the 1-5 nodes in a small group. I think thousands of ongoing connections quickly overwhelm TCP's buffers, so UDP/QUIC makes sense. Yet, small files might favor QUIC streams even more than large files do. At the same time, we do technically have 1-5 senders, so we'd probably develop a general-purpose bittorrent-like protocol over QUIC if we believed it gave us anything. – Jeff Burdges Jun 23 '19 at 07:47
  • 1
    Ah, small file transfer does not benefit much from unordered, unreliable transfer. But it can benefit from other QUIC aspects such as the concurrent transfer/multiplexing or 0RTT session resumption as alternative to keep-alives. But handling 1 - 5K TCP connections wouldn't be a problem for modern TCP stacks and servers using event-based socket io (epoll, kqueue, io_uring etc.). It might not provide optimal latency compared to QUIC but should be able to send a few kB every few seconds per connection. – the8472 Jun 23 '19 at 15:39