There are situations where a direct connection _can't_ be established, and in those cases traffic falls back to running through the relay. Relay servers **do not** have access to the data being transmitted, as it's encrypted end-to-end.

We're working on formally collecting the direct connection rate from production iroh networks. Anecdotal evidence points to the holepunching rate being around 90%, meaning 9 out of 10 times, a direct connection is established.

## Public relays

iroh is configured with a set of public relays provided by [The n0 team](https://n0.computer) that are free to use. The public relays rate-limit traffic that flows through the relay. This is to prevent abuse, and ensure the relays are available to everyone. There are no guarantees around uptime or performance when using the public relays.
We recommend using the public relays for development and testing, as they are free to use and require no setup. However, for production systems, we recommend using dedicated relays instead.

## Dedicated relays

For production use, we recommend using dedicated relays. Dedicated relays are relay servers that are either self-hosted or provided as a managed service. Dedicated relays provide better performance, security, and uptime guarantees compared to the public relays.

Relay code is open source! You can run your own relay server, or [pick a hosting provider](https://n0des.iroh.computer).
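
To point an endpoint at a dedicated relay, you configure the endpoint builder with your relay's URL before binding. The following is a minimal sketch, assuming a recent iroh release that exposes `RelayMode::Custom` and `RelayMap::from_url`; the relay URL is a placeholder, so check the iroh API docs for the exact types in your version:

```rust
use iroh::{Endpoint, RelayMap, RelayMode, RelayUrl};

async fn bind_with_dedicated_relay() -> anyhow::Result<Endpoint> {
    // Placeholder URL: substitute the URL of your own relay deployment.
    let relay_url: RelayUrl = "https://relay.example.com".parse()?;

    // Route relayed traffic through the dedicated relay instead of the
    // default public n0 relays.
    let endpoint = Endpoint::builder()
        .relay_mode(RelayMode::Custom(RelayMap::from_url(relay_url)))
        .bind()
        .await?;

    Ok(endpoint)
}
```

The server side lives in the `iroh-relay` crate; the endpoint-side configuration above is the same whether the relay is self-hosted or managed.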
### Why this architecture is powerful

Iroh will automatically switch between direct and relayed connections as needed, without any action on your part.

Using dedicated relays can provide several benefits, including improved performance, enhanced security, better uptime guarantees, and greater control over your network infrastructure. By using your own servers, you can optimize connection speeds and reduce latency for your specific use case.

For even better guarantees, you can distribute relays across multiple cloud providers.
Adding capacity means spinning up more lightweight relay instances, not provisioning databases or managing complex stateful server infrastructure. You can easily scale up for peak usage and scale down during quiet periods.
This architecture inverts the traditional model: instead of treating servers as precious stateful resources and clients as disposable, relay-based architectures treat relays as disposable connection facilitators while clients own the application state and logic.
Please note that, in this case, the client doesn't immediately close the connection after a single request (duh!). Instead, it might want to optimistically keep the connection open for some idle time or until it knows the application won't need to make another request, and only then close the connection. All that said, it's still true that **the connecting side closes the connection**.
## Multiple ordered Notifications

In that case, we need to break up the single response stream into multiple response streams.
We can do this by "conceptually" splitting the "single" bi-directional stream into one uni-directional stream for the request and multiple uni-directional streams in the other direction for all the responses:

Another thing that might or might not be important for your use case is knowing when you've received the final response.
You could introduce another message type that is interpreted as a finishing token, but there's a more elegant way of solving this. Instead of only opening a uni-directional stream for the request, you open a bi-directional one. The response stream will only be used to indicate the final response stream ID. It then acts as a sort of "control stream" that provides auxiliary information about the request to the connecting endpoint.
</Note>
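
On the connecting side, this control-stream pattern might look roughly like the following sketch. It assumes quinn-style stream APIs as re-exported by iroh, and the 8-byte big-endian stream-index encoding on the control stream is an illustrative assumption, not a fixed wire format:

```rust
use iroh::endpoint::Connection;

async fn request_with_streamed_responses(conn: &Connection) -> anyhow::Result<()> {
    // Open a bi-directional stream: the send half carries the request,
    // while the recv half acts as the "control stream".
    let (mut request, mut control) = conn.open_bi().await?;
    request.write_all(b"example request").await?;
    request.finish()?;

    // Each response arrives on its own uni-directional stream. The accepting
    // side eventually writes the index of the final response stream on the
    // control stream, so we know when we've seen everything.
    let bytes = control.read_to_end(8).await?;
    let final_index = u64::from_be_bytes(bytes.as_slice().try_into()?);

    loop {
        let mut response = conn.accept_uni().await?;
        // Per-initiator stream indices increase monotonically, so the final
        // index tells us when the last response has arrived.
        let index = response.id().index();
        let body = response.read_to_end(1024 * 1024).await?;
        // Application-specific handling of one response (placeholder).
        drop(body);
        if index >= final_index {
            return Ok(());
        }
    }
}
```
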
## Time-sensitive Real-time interaction

We often see users reaching for the QUIC datagram extension when implementing real-time protocols. In most cases, this is misguided.

A real-world example is the media over QUIC protocol (MoQ in short): MoQ is used to stream live video over QUIC.
The receiver then stops streams that are "too old" to be delivered, e.g. because it's a live video stream and newer frames were already fully received.
Similarly, the sending side will reset older streams at the application level to indicate to the QUIC stack that it doesn't need to keep retrying transmission of an outdated live video frame. (MoQ will actually also use stream prioritization to make sure the newest video frames get scheduled to be sent first.)

Sometimes you need to abandon a stream before it completes - either because the data has become stale, or because you've decided you no longer need it. QUIC provides mechanisms for both the sender and receiver to abort streams gracefully.

### When to abort streams

A real-world example comes from Media over QUIC (MoQ), which streams live video frames. Consider this scenario:

- Each video frame is sent on its own uni-directional stream
- Frames arrive out of order due to network conditions
- By the time an old frame finishes transmitting, newer frames have already been received
- Continuing to receive the old frame wastes bandwidth and processing time

### How to abort streams

**From the sender side:**

Use `SendStream::reset(error_code)` to immediately stop sending data and discard any buffered bytes. This tells QUIC to stop retrying lost packets for this stream.

**From the receiver side:**

Use `RecvStream::stop(error_code)` to tell the sender you're no longer interested in the data. This allows the sender's QUIC stack to stop retransmitting lost packets.

### Key insights

1. **Stream IDs indicate order**: QUIC stream IDs are monotonically increasing. You can compare stream IDs to determine which streams are newer without relying on application-level sequencing.

2. **Both sides can abort**: Either the sender (via `reset`) or receiver (via `stop`) can abort a stream. Whichever side detects the data is no longer needed first should initiate the abort.

3. **QUIC stops retransmissions**: When a stream is reset or stopped, QUIC immediately stops trying to recover lost packets for that stream, saving bandwidth and processing time.

4. **Streams are cheap**: Opening a new stream is very fast (no round-trips required), so it's perfectly fine to open one stream per video frame, message, or other small unit of data.

This pattern of using many short-lived streams that can be individually aborted is one of QUIC's most powerful features for real-time applications. It gives you fine-grained control over what data is worth transmitting, without the head-of-line blocking issues that would occur with a single TCP connection.
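
The first and fourth insights combine nicely in practice: because per-initiator stream IDs increase monotonically, a receiver can decide purely from IDs whether an incoming frame is already obsolete. Here is a minimal sketch of that decision with a hypothetical `should_stop` helper; tracking the latest completed frame's stream ID is left to your application:

```rust
/// Returns true when the incoming frame stream should be stopped because a
/// frame on a newer (higher-ID) stream has already been fully received.
/// Hypothetical helper for illustration; the IDs are per-initiator stream
/// indices, which increase monotonically.
fn should_stop(incoming_id: u64, latest_complete_id: Option<u64>) -> bool {
    match latest_complete_id {
        // A newer frame is already complete: the incoming one is obsolete.
        Some(latest) => incoming_id < latest,
        // No frame completed yet: keep receiving everything.
        None => false,
    }
}

fn main() {
    assert!(!should_stop(4, None)); // nothing complete yet: keep stream 4
    assert!(should_stop(4, Some(12))); // stream 12 done: stop older stream 4
    assert!(!should_stop(16, Some(12))); // stream 16 is newer: keep it
}
```

The receiving side would call `RecvStream::stop` whenever this returns true, and the sender would make the mirror-image decision before calling `reset`.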
With these two lines, we've initialized iroh-blobs and given it access to our `Endpoint`.
### Ping: Send
At this point what we want to do depends on whether we want to accept incoming iroh connections from the network or create outbound iroh connections to other endpoints.

```rust
use anyhow::Result;
use iroh::Endpoint;
use iroh_ping::Ping;

#[tokio::main]
async fn main() -> Result<()> {
    // Create an endpoint, it allows creating and accepting
    // connections in the iroh p2p world
    let endpoint = Endpoint::bind().await?;

    // Then we initialize a struct that can accept ping requests over iroh connections
```