
Commit 9905bbf

Merge pull request #4 from n0-computer/rae/docs-review2
Rae/docs review2
2 parents f1c174b + cadcbf9 commit 9905bbf

File tree

6 files changed: +88 −76 lines


about/roadmap.mdx

Lines changed: 0 additions & 31 deletions
This file was deleted.

concepts/relays.mdx

Lines changed: 35 additions & 18 deletions
```diff
@@ -2,22 +2,41 @@
 title: "Relays"
 ---
 
-Relays are servers that help establish connections between devices.
-
-Relays temporarily route encrypted traffic until a direct, P2P connection is
+Relays are servers that temporarily route encrypted traffic until a direct, P2P connection is
 feasible. Once this direct path is set up, the relay server steps back, and the
 data flows directly between devices. This approach allows Iroh to maintain a
 secure, low-latency connection, even in challenging network situations.
 
-Relays are also open source! You can run your own relay server, or use one of
-the public relays. Code is
-[here](https://github.com/n0-computer/iroh/tree/main/iroh-relay), and we build
-relay binaries for most platforms with each iroh
-[release](https://github.com/n0-computer/iroh/releases)
+There are situations where a direct connection _can't_ be established, and in
+those cases traffic falls back to running through the relay. Relay servers **do
+not** have access to the data being transmitted, as it's encrypted end-to-end.
+
+We're working on formally collecting the direct connection rate from production
+iroh networks. Anecdotal evidence points to the holepunching rate being around
+90%, meaning 9 out of 10 times, a direct connection is established.
+
+## Public relays
+
+iroh is configured with a set of public relays provided by [The n0
+team](https://n0.computer) that are free to use. The public relays rate-limit
+traffic that flows through the relay. This is to prevent abuse, and ensure the
+relays are available to everyone. There are no guarantees around uptime or
+performance when using the public relays.
 
-There are situations where a direct connection _can't_ be established, and in those cases traffic falls back to running through the relay. Relay servers **do not** have access to the data being transmitted, as it's encrypted end-to-end.
+We recommend using the public relays for development and testing, as they are
+free to use and require no setup. However, for production systems, we recommend
+using dedicated relays instead.
+
+## Dedicated relays
+
+For production use, we recommend using dedicated relays. Dedicated relays are relay
+servers that are either self-hosted or provided as a managed service. Dedicated
+relays provide better performance, security, and uptime guarantees compared to
+the public relays.
+
+Relay code is open source! You can run your own relay server, or [pick a hosting
+provider](https://n0des.iroh.computer).
 
-We're working on formally collecting the direct connection rate from production iroh networks. Anecdotal evidence points to the holepunching rate being around 90%, meaning 9 out of 10 times, a direct connection is established.
 
 ### Why this architecture is powerful
 
@@ -53,12 +72,10 @@ to relayed, or even a mixed combination of the two. Iroh will automatically
 switch between direct and relayed connections as needed, without any action
 required from the application.
 
-## number 0 public relays
+## Read more
+
+- [Dedicated infrastructure guide](/deployment/dedicated-infrastructure)
+- [Relay source code](https://github.com/n0-computer/iroh/tree/main/iroh-relay)
+- [Relay binary releases](https://github.com/n0-computer/iroh/releases)
+- [Managed relay service](https://n0des.iroh.computer)
 
-number 0 provides a set of public relays that are free to use, and are
-configured by default. You're more than welcome to run production systems using
-the public relays if you find performance acceptable. The public relays do
-rate-limit traffic that flows through the relay. This is to prevent abuse, and
-ensure the relays are available to everyone. If you need more capacity, you can
-run your own relay server, or [contact us about a custom relay
-setup](https://n0.computer/n0ps/).
```
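The relays doc above states that iroh automatically switches between direct and relayed connections without any action from the application. A minimal, std-only Rust sketch of that decision can be helpful; note that `ConnectionState`, `Path`, and `best_path` are hypothetical names invented for illustration and are not part of the iroh API:

```rust
// Hypothetical sketch, NOT the iroh API: illustrates the fallback rule the
// relays doc describes. Use the direct path once holepunching has succeeded;
// until then, route traffic through the relay.

#[derive(Debug, PartialEq)]
enum Path {
    Direct(&'static str),
    Relay(&'static str),
}

struct ConnectionState {
    relay_url: &'static str,
    // Some(..) once holepunching has produced a working direct address.
    direct_addr: Option<&'static str>,
}

impl ConnectionState {
    /// Pick the best available path: direct if one exists, relay otherwise.
    fn best_path(&self) -> Path {
        match self.direct_addr {
            Some(addr) => Path::Direct(addr),
            None => Path::Relay(self.relay_url),
        }
    }
}

fn main() {
    let mut conn = ConnectionState {
        relay_url: "https://relay.example.net",
        direct_addr: None,
    };
    // Before holepunching completes, traffic falls back to the relay.
    assert_eq!(conn.best_path(), Path::Relay("https://relay.example.net"));

    // Once a direct path is established, the relay steps back.
    conn.direct_addr = Some("192.0.2.1:4433");
    assert_eq!(conn.best_path(), Path::Direct("192.0.2.1:4433"));
}
```

Real iroh tracks per-path liveness continuously, so a connection can move back to the relay if the direct path later degrades; this sketch only captures the one-shot decision.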

connecting/dns-discovery.mdx

Lines changed: 2 additions & 2 deletions
```diff
@@ -34,8 +34,8 @@ async fn main() -> anyhow::Result<()> {
 
     println!("endpoint id: {:?}", endpoint.id());
 
-let ticket = EndpointTicket::new(endpoint.addr());
-println!("Share this ticket to let others connect to your endpoint: {ticket}");
+    let ticket = EndpointTicket::new(endpoint.addr());
+    println!("Share this ticket to let others connect to your endpoint: {ticket}");
 
     Ok(())
 }
```

deployment/dedicated-infrastructure.mdx

Lines changed: 10 additions & 10 deletions
```diff
@@ -17,9 +17,18 @@ We recommend using the public relays for development and testing, as they are
 free to use and require no setup. However, for production systems, we recommend
 using dedicated relays instead.
 
+## Using dedicated relays
+
+To use dedicated relays with your iroh endpoint, you need to configure the
+endpoint to use your relay's URL.
+
+For detailed information on configuring custom relays, including code examples
+and API documentation, see the [iroh relay configuration
+guide](https://n0des.iroh.computer/docs/relays/managed).
+
 ## Why use dedicated relays in production?
 
-Using a dedicated relays can provide several benefits, including improved performance,
+Using dedicated relays can provide several benefits, including improved performance,
 enhanced security, better uptime guarantees, and greater control over your network infrastructure. By
 using your own servers, you can optimize connection speeds and reduce
 latency for your specific use case.
@@ -64,12 +73,3 @@ For even better guarantees, you can distribute relays across multiple cloud prov
 Adding capacity means spinning up more lightweight relay instances, not provisioning databases or managing complex stateful server infrastructure. You can easily scale up for peak usage and scale down during quiet periods.
 
 This architecture inverts the traditional model: instead of treating servers as precious stateful resources and clients as disposable, relay-based architectures treat relays as disposable connection facilitators while clients own the application state and logic.
-
-## Using dedicated relays
-
-To use dedicated relays with your iroh endpoint, you need to configure the
-endpoint to use your relay's URL.
-
-For detailed information on configuring custom relays, including code examples
-and API documentation, see the [iroh relay configuration
-guide](https://n0des.iroh.computer/docs/relays/managed).
```
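The dedicated-infrastructure doc notes that relays are stateless connection facilitators, so adding capacity is just spinning up more instances. A std-only sketch of what that implies for clients (spreading endpoints over a list of relay URLs round-robin); `RelayPool` and `pick` are hypothetical illustration names, not iroh or n0des APIs, and the URLs are placeholders:

```rust
// Hypothetical sketch, NOT the iroh API: because relays hold no application
// state, "scaling up" can be as simple as appending another URL to the list
// and spreading new endpoints across it round-robin.

struct RelayPool {
    urls: Vec<String>,
    next: usize,
}

impl RelayPool {
    fn new(urls: Vec<String>) -> Self {
        Self { urls, next: 0 }
    }

    /// Hand out relay URLs round-robin. Panics if the pool is empty;
    /// scaling up is just `self.urls.push(..)`.
    fn pick(&mut self) -> &str {
        let i = self.next % self.urls.len();
        self.next += 1;
        &self.urls[i]
    }
}

fn main() {
    let mut pool = RelayPool::new(vec![
        "https://relay-eu.example.net".to_string(),
        "https://relay-us.example.net".to_string(),
    ]);
    // Endpoints rotate through the available relays.
    assert_eq!(pool.pick(), "https://relay-eu.example.net");
    assert_eq!(pool.pick(), "https://relay-us.example.net");
    assert_eq!(pool.pick(), "https://relay-eu.example.net");
}
```

In practice you would likely pick by geographic proximity or measured latency rather than round-robin; the point is only that the pool carries no server-side state, so membership can change freely.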

protocols/using-quic.md

Lines changed: 37 additions & 10 deletions
````diff
@@ -200,7 +200,7 @@ async fn request(conn: &Connection, request: &[u8]) -> Result<Vec<u8>> {
 
     let response = recv.read_to_end(MAX_RESPONSE_SIZE).await?;
 
-    Ok(response);
+    Ok(response)
 }
 
 // The accepting endpoint will call this to handle all
@@ -229,7 +229,7 @@ async fn handle_request(mut send: SendStream, mut recv: RecvStream) -> Result<()
 }
 ```
 
-Please not that, in this case, the client doesn't immediately close the connection after a single request (duh!). Instead, it might want to optimistically keep the connection open for some idle time or until it knows the application won't need to make another request, and only then close the connection. All that said, it's still true that **the connecting side closes the connection**.
+Please note that, in this case, the client doesn't immediately close the connection after a single request (duh!). Instead, it might want to optimistically keep the connection open for some idle time or until it knows the application won't need to make another request, and only then close the connection. All that said, it's still true that **the connecting side closes the connection**.
 
 ## Multiple ordered Notifications
 
@@ -333,7 +333,7 @@ In that case, we need to break up the single response stream into multiple respo
 We can do this by "conceptually" splitting the "single" bi-directional stream into one uni-directional stream for the request and multiple uni-directional streams in the other direction for all the responses:
 
 ```rs
-async fn connnecting_side(conn: Connection, request: &[u8]) -> Result<()> {
+async fn connecting_side(conn: Connection, request: &[u8]) -> Result<()> {
     let mut send = conn.open_uni().await?;
     send.write_all(request).await?;
     send.finish()?;
@@ -362,9 +362,6 @@ Another thing that might or might not be important for your use case is knowing
 You can either introduce another message type that is interpreted as a finishing token, but there's another elegant way of solving this. Instead of only opening a uni-directional stream for the request, you open a bi-directional one. The response stream will only be used to indicate the final response stream ID. It then acts as a sort of "control stream" to provide auxiliary information about the request for the connecting endpoint.
 </Note>
 
-## Proxying UDP traffic using the unreliable datagram extension
-
-
 ## Time-sensitive Real-time interaction
 
 We often see users reaching for the QUIC datagram extension when implementing real-time protocols. Doing this is in most cases misguided.
@@ -377,9 +374,6 @@ A real-world example is the media over QUIC protocol (MoQ in short): MoQ is used
 The receiver then stops streams that are "too old" to be delivered, e.g. because it's a live video stream and newer frames were already fully received.
 Similarly, the sending side will also reset older streams for the application level to indicate to the QUIC stack it doesn't need to keep re-trying the transmission of an outdated live video frame. (MoQ will actually also use stream prioritization to make sure the newest video frames get scheduled to be sent first.)
 
-https://discord.com/channels/1161119546170687619/1195362941134962748/1407266901939327007
-https://discord.com/channels/976380008299917365/1063547094863978677/1248723504246030336
-
 ## Closing Connections
 
 Gracefully closing connections can be tricky to get right when first working with the QUIC API.
@@ -428,7 +422,40 @@ And again, after `handle_connection` we need to make sure to wait for `Endpoint:
 
 ## Aborting Streams
 
-https://discord.com/channels/949724860232392765/1399719019292000329/1399721482522984502
+Sometimes you need to abandon a stream before it completes - either because the data has become stale, or because you've decided you no longer need it. QUIC provides mechanisms for both the sender and receiver to abort streams gracefully.
+
+### When to abort streams
+
+A real-world example comes from Media over QUIC (MoQ), which streams live video frames. Consider this scenario:
+
+- Each video frame is sent on its own uni-directional stream
+- Frames arrive out of order due to network conditions
+- By the time an old frame finishes transmitting, newer frames have already been received
+- Continuing to receive the old frame wastes bandwidth and processing time
+
+### How to abort streams
+
+**From the sender side:**
+
+Use `SendStream::reset(error_code)` to immediately stop sending data and discard any buffered bytes. This tells QUIC to stop retrying lost packets for this stream.
+
+
+**From the receiver side:**
+
+Use `RecvStream::stop(error_code)` to tell the sender you're no longer interested in the data. This allows the sender's QUIC stack to stop retransmitting lost packets.
+
+
+### Key insights
+
+1. **Stream IDs indicate order**: QUIC stream IDs are monotonically increasing. You can compare stream IDs to determine which streams are newer without relying on application-level sequencing.
+
+2. **Both sides can abort**: Either the sender (via `reset`) or receiver (via `stop`) can abort a stream. Whichever side detects the data is no longer needed first should initiate the abort.
+
+3. **QUIC stops retransmissions**: When a stream is reset or stopped, QUIC immediately stops trying to recover lost packets for that stream, saving bandwidth and processing time.
+
+4. **Streams are cheap**: Opening a new stream is very fast (no round-trips required), so it's perfectly fine to open one stream per video frame, message, or other small unit of data.
+
+This pattern of using many short-lived streams that can be individually aborted is one of QUIC's most powerful features for real-time applications. It gives you fine-grained control over what data is worth transmitting, without the head-of-line blocking issues that would occur with a single TCP connection.
 
 ## QUIC 0-RTT features
 
````
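The "Aborting Streams" section added above says a receiver can compare monotonically increasing stream IDs to spot stale frames without application-level sequence numbers. A std-only Rust sketch of that receiver-side check; `FrameRx`, `newest_complete`, and `worth_receiving` are hypothetical names for illustration and not part of any QUIC crate:

```rust
// Hypothetical sketch, NOT a real QUIC API: decides which in-flight frames
// are stale purely by comparing stream IDs, as the "Key insights" describe.
// A frame that fails this check is one the receiver should `stop()`.

struct FrameRx {
    // Stream ID of the newest frame we have fully received, if any.
    newest_complete: Option<u64>,
}

impl FrameRx {
    /// Returns true if a frame arriving on `stream_id` is still worth
    /// receiving; false means it's older than a frame we already have.
    fn worth_receiving(&self, stream_id: u64) -> bool {
        match self.newest_complete {
            // Stream IDs increase monotonically, so a smaller ID is older.
            Some(newest) => stream_id > newest,
            None => true,
        }
    }
}

fn main() {
    let mut rx = FrameRx { newest_complete: None };
    assert!(rx.worth_receiving(4)); // nothing received yet: keep it

    rx.newest_complete = Some(12); // frame on stream 12 fully received
    assert!(!rx.worth_receiving(4)); // stale frame: stop() this stream
    assert!(rx.worth_receiving(16)); // newer frame: keep receiving
}
```

Real QUIC stream IDs also encode the initiator and directionality in their low bits, so production code would compare IDs only within one stream type; the sketch ignores that detail.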

quickstart.mdx

Lines changed: 4 additions & 5 deletions
````diff
@@ -88,7 +88,7 @@ use iroh_ping::Ping;
 async fn main() -> anyhow::Result<()> {
     // Create an endpoint, it allows creating and accepting
     // connections in the iroh p2p world
-    let endpoint = Endpoint::builder().bind().await?;
+    let endpoint = Endpoint::bind().await?;
 
     // bring the endpoint online before accepting connections
     endpoint.online().await;
@@ -107,7 +107,6 @@
 }
 ```
 
-With these two lines, we've initialized iroh-blobs and gave it access to our `Endpoint`.
 
 ### Ping: Send
 At this point what we want to do depends on whether we want to accept incoming iroh connections from the network or create outbound iroh connections to other endpoints.
@@ -124,7 +123,7 @@ use iroh_ping::Ping;
 async fn main() -> Result<()> {
     // Create an endpoint, it allows creating and accepting
     // connections in the iroh p2p world
-    let endpoint = Endpoint::builder().bind().await?;
+    let endpoint = Endpoint::bind().await?;
 
     // Then we initialize a struct that can accept ping requests over iroh connections
     let ping = Ping::new();
@@ -138,7 +137,7 @@ async fn main() -> Result<()> {
     let addr = recv_router.endpoint().addr();
 
     // create a send side & send a ping
-    let send_ep = Endpoint::builder().bind().await?;
+    let send_ep = Endpoint::bind().await?;
     let send_pinger = Ping::new();
     let rtt = send_pinger.ping(&send_ep, addr).await?;
 
@@ -180,7 +179,7 @@ relay URL, and direct addresses. An address is a structured representation of
 this information that can be consumed by iroh endpoints to dial each other.
 
 An `EndpointTicket` wraps this address into a serializable format -- a short
-string you can copy and paste. Share this sting with senders so they can dial
+string you can copy and paste. Share this string with senders so they can dial
 the receiver without manually exchanging networking details.
 
 This out of band information must be sent between the endpoints so that they can
````
