
Conversation

@Angelmmiguel (Contributor)

Force the send_tcp_chunk method to send data immediately. This avoids waiting on TCP timeouts when packets are small.

Refs #6

@gregnr (Member) commented Apr 25, 2025

Thanks @Angelmmiguel 😃 happy to merge this. I just finished adding CI tests to the repo - would you mind merging/rebasing on the latest main so that CI can run tests on your PR?

Also, not a blocker to merge this, but if we can somehow reproduce the previous delayed behaviour in a test to prove that this fixes it, that would be excellent.

@Angelmmiguel (Contributor, Author) commented Apr 26, 2025

> Thanks @Angelmmiguel 😃 happy to merge this. I just finished adding CI tests to the repo - would you mind merging/rebasing on the latest main so that CI can run tests on your PR?

Sure! I will rebase the changes.

> Also, not a blocker to merge this, but if we can somehow reproduce the previous delayed behaviour in a test to prove that this fixes it, that would be excellent.

Let me try to create a test for this behavior 👍

@Angelmmiguel (Contributor, Author)

@gregnr I added a test to reproduce the issue. If you can enable the workflow, I will uncomment the solution in a separate commit to confirm it fixes the delay.
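
Roughly, the test measures the write-to-read latency of a small payload over a loopback TCP connection on a single stack and asserts it stays below a threshold. A sketch of that shape (the 127.0.0.1 address and exact threshold are assumptions; the committed test may differ slightly):

test('tcp packets are sent immediately without delay', async () => {
  const stack = await createStack();

  // Listen and connect over the stack's default loopback interface.
  const listener = await stack.listenTcp({ port: 8080 });

  const [outbound, inbound] = await Promise.all([
    stack.connectTcp({ host: '127.0.0.1', port: 8080 }),
    nextValue(listener),
  ]);

  const data = new Uint8Array([0x01, 0x02, 0x03, 0x04]);
  const inboundReader = inbound.readable.getReader();
  const outboundWriter = outbound.writable.getWriter();

  // Measure how long a small write takes to arrive on the other side.
  const startTime = performance.now();
  await outboundWriter.write(data);
  const received = await inboundReader.read();
  const elapsed = performance.now() - startTime;

  expect(received.value).toStrictEqual(data);
  // Without the immediate flush, this sits around a ~300ms delay.
  expect(elapsed).toBeLessThan(100);
});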

Thanks!

@gregnr (Member) commented Apr 26, 2025

@Angelmmiguel nice, looks like they're failing without the fix 🔥 Feel free to uncomment!

@Angelmmiguel (Contributor, Author)

> @Angelmmiguel nice, looks like they're failing without the fix 🔥 Feel free to uncomment!

Amazing! I've just applied the fix again 😄

@Angelmmiguel (Contributor, Author) commented Apr 26, 2025

Interesting, it seems 100ms is too low a threshold for CI. All the tests pass locally, but on CI some of the node and browser tests are failing:

⎯⎯⎯⎯⎯⎯⎯ Failed Tests 2 ⎯⎯⎯⎯⎯⎯⎯

 FAIL   node  src/stack.test.ts > tcp > tcp packets are sent immediately without delay
AssertionError: expected 100.12071800000012 to be less than 100
 ❯ src/stack.test.ts:882:21
    880|     // Without tcp_output, there would be a significant delay (~300ms)
    881|     const elapsed = endTime - startTime;
    882|     expect(elapsed).toBeLessThan(100);
       |                     ^
    883|   });
    884| });

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[1/2]⎯

 FAIL   browser (webkit)  src/stack.test.ts > tcp > tcp packets are sent immediately without delay
AssertionError: expected 100 to be less than 100
 ❯ src/stack.test.ts:882:33
    880|     // Without tcp_output, there would be a significant delay (~300ms)
    881|     const elapsed = endTime - startTime;
    882|     expect(elapsed).toBeLessThan(100);
       |                                 ^
    883|   });
    884| });

Let me increase the threshold for the CI tests.
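
A relaxed assertion along these lines should absorb CI jitter (the 150ms value is illustrative; the actual threshold chosen isn't shown in this thread):

// CI runners are slower and noisier than local machines, so leave some
// headroom above the 100ms value that was failing. 150 is an arbitrary pick.
expect(elapsed).toBeLessThan(150);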

@Angelmmiguel (Contributor, Author)

It worked! 😄

@gregnr (Member) commented Apr 26, 2025

The fact that they were all around 100ms makes me suspect that this time is limited by our lwIP processing interval:

setInterval(() => {
  wasmInstance.exports.process_queued_packets();
  wasmInstance.exports.process_timeouts();
}, 100);

This actually makes sense, since I see you are using the default loopback interface in that test for the TCP connection.

For context, lwIP will queue loopback packets instead of invoking their callback immediately, due to reentrancy issues. It requires you to regularly process these queued packets, which we do above via process_queued_packets(). So on the loopback interface, after sending a TCP packet, you'd have to wait for up to the above interval before you receive it. TBH I haven't determined the best length for this interval - too long and you're waiting forever for your loopback packets, too short and you're creating excessive CPU usage.

To make this test more accurate though, we can use tun interfaces instead and send the packets between two stacks:

test('tcp packets are sent immediately without delay', async () => {
  const stack1 = await createStack();
  const stack2 = await createStack();

  const tun1 = await stack1.createTunInterface({
    ip: '192.168.1.1/24',
  });

  const tun2 = await stack2.createTunInterface({
    ip: '192.168.1.2/24',
  });

  // Connect the two interfaces
  tun1.readable.pipeTo(tun2.writable);
  tun2.readable.pipeTo(tun1.writable);

  const listener = await stack2.listenTcp({
    port: 8080,
  });

  const [outbound, inbound] = await Promise.all([
    stack1.connectTcp({
      host: '192.168.1.2',
      port: 8080,
    }),
    nextValue(listener),
  ]);

  const data = new Uint8Array([0x01, 0x02, 0x03, 0x04]);

  const inboundReader = inbound.readable.getReader();
  const outboundWriter = outbound.writable.getWriter();

  // Record start time
  const startTime = performance.now();

  // Write and read immediately
  await outboundWriter.write(data);
  const received = await inboundReader.read();

  // Record end time
  const endTime = performance.now();

  expect(received.value).toStrictEqual(data);

  // Check timing - with tcp_output enabled, this should be very fast
  // Without tcp_output, there would be a significant delay (~300ms)
  const elapsed = endTime - startTime;
  expect(elapsed).toBeCloseTo(0, -1);
});

This should avoid the need to wait on the interval since tun interfaces don't queue packets.

@Angelmmiguel (Contributor, Author)

> For context, lwIP will queue loopback packets instead of invoking their callback immediately, due to reentrancy issues. It requires you to regularly process these queued packets, which we do above via process_queued_packets(). So on the loopback interface, after sending a TCP packet, you'd have to wait for up to the above interval before you receive it. TBH I haven't determined the best length for this interval - too long and you're waiting forever for your loopback packets, too short and you're creating excessive CPU usage.

This is quite interesting, thank you for the context. Maybe it could be a configuration parameter for the createStack method. Some use cases might prioritize one behavior over the other.
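
For illustration, such an option could look like the sketch below (processInterval is a hypothetical name, not an existing createStack parameter):

// Hypothetical API sketch: make the queued-packet/timeout processing interval
// configurable per stack. Lower values reduce loopback latency at the cost of
// more frequent CPU wakeups.
const stack = await createStack({
  processInterval: 10, // milliseconds
});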

> To make this test more accurate though, we can use tun interfaces instead and send the packets between two stacks.

That makes perfect sense. I updated the tests and confirmed they fail without the change I introduced:

   × tcp > tcp packets are sent immediately without delay 324ms
     → expected 297.23817899999995 to be close to +0, received difference is 297.23817899999995, but expected 5
   ...

⎯⎯⎯⎯⎯⎯⎯ Failed Tests 1 ⎯⎯⎯⎯⎯⎯⎯

 FAIL   node  src/stack.test.ts > tcp > tcp packets are sent immediately without delay
AssertionError: expected 297.23817899999995 to be close to +0, received difference is 297.23817899999995, but expected 5
 ❯ src/stack.test.ts:894:21
    892|     // Without tcp_output, there would be a significant delay (~300ms)
    893|     const elapsed = endTime - startTime;
    894|     expect(elapsed).toBeCloseTo(0, -1);
       |                     ^
    895|   });
    896| });

@gregnr merged commit a99b8b0 into chipmk:main on Apr 27, 2025 (1 check passed).
@gregnr (Member) commented Apr 27, 2025

Thanks again for this @Angelmmiguel, and for including a test. Great work! This is now released in v0.3.3.

@Angelmmiguel (Contributor, Author)

> Thanks again for this @Angelmmiguel, and for including a test. Great work! This is now released in v0.3.3.

Amazing!! Thank you for the fast review and release 😄
