This project tests how Linux kernel network settings (sysctl) affect latency when sending data using Rust. I wanted to see if changing the socket buffer sizes makes a difference for high-speed data transfers.
I compared two configurations using my Python script to see how they handle a 128KB payload:
- Restricted: Small 64KB buffers.
- Optimized: Large 16MB buffers.
Key observations from the resulting latency graph:
- Lower Latency Baseline: The optimized settings (green line) stay consistently lower than the restricted settings (red line).
- Fewer Spikes: With small 64KB buffers, the system frequently hits latency spikes. This happens because the buffer is too small for the 128KB payload, causing the kernel to stall and wait for acknowledgments.
- Result: Increasing the Linux TCP buffer limits reduced the baseline latency by ~330 ms (~31%) for the 128KB workload.
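The stall in the restricted case follows from simple arithmetic. The sketch below is a simplified model (names and the helper function are my own, not project code); it only uses the payload and buffer sizes given above:

```python
# Why a 64KB buffer stalls a 128KB payload: the kernel can queue at most
# one send-buffer's worth of unacknowledged data, so the sender must wait
# for ACKs before the rest of the payload can be written.
PAYLOAD = 128 * 1024          # 131072 bytes per burst
SMALL_BUF = 64 * 1024         # restricted buffer: 65536 bytes
LARGE_BUF = 16 * 1024 * 1024  # optimized buffer: 16777216 bytes

def rounds(payload: int, buf: int) -> int:
    """Number of buffer-sized rounds needed to queue the whole payload."""
    return -(-payload // buf)  # ceiling division

print(rounds(PAYLOAD, SMALL_BUF))  # 2: at least one ACK-gated stall per burst
print(rounds(PAYLOAD, LARGE_BUF))  # 1: the whole payload fits in one write
```

This is why the spikes disappear once the buffer comfortably exceeds the payload: every burst completes in a single write.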
The project has two components:
- Rust Profiler: A tool I built using `tokio` that sends data bursts and measures the exact time taken for each request.
- Python Orchestrator: A script that uses `subprocess` to change the `sysctl` settings, runs the Rust binary, and plots the graph.
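The profiler itself is Rust/`tokio`, but its measurement loop can be sketched in Python. This is a simplified stand-in, not the project's actual code: it sends 128KB bursts to a local sink and records how long each blocking send takes, which is exactly the time that grows when the send buffer is too small:

```python
import socket
import threading
import time

PAYLOAD = b"x" * (128 * 1024)  # 128KB burst, as in the experiment

def run_sink(server: socket.socket) -> None:
    """Accept one connection and drain everything sent to it."""
    conn, _ = server.accept()
    with conn:
        while conn.recv(65536):
            pass

def measure_burst_latency(n_requests: int = 5) -> list:
    """Send n bursts and record the wall-clock time each send takes (ms)."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # ephemeral port
    server.listen(1)
    threading.Thread(target=run_sink, args=(server,), daemon=True).start()

    latencies = []
    with socket.create_connection(server.getsockname()) as client:
        for _ in range(n_requests):
            start = time.perf_counter()
            client.sendall(PAYLOAD)  # blocks if the send buffer is full
            latencies.append((time.perf_counter() - start) * 1000)
    server.close()
    return latencies

if __name__ == "__main__":
    for i, ms in enumerate(measure_burst_latency()):
        print(f"request {i}: {ms:.3f} ms")
```

The key detail mirrored from the Rust tool: timing `sendall` captures kernel back-pressure, because the call only returns once the kernel has accepted the whole burst into its send buffer.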
The script modifies the kernel's read/write memory limits. Here is the logic used in `experiment.py`:
```python
# High-performance settings for the 16MB test
{
    "net.core.rmem_max": 16777216,
    "net.core.wmem_max": 16777216,
    "net.ipv4.tcp_rmem": "4096 87380 16777216",
    "net.ipv4.tcp_wmem": "4096 65536 16777216",
    "net.ipv4.tcp_window_scaling": 1,
}
```
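Applying these values requires root. A minimal sketch of how such a dict can be handed to `sysctl -w` via `subprocess` (the helper name and `dry_run` flag are my additions for illustration, and the settings are restated so the sketch is self-contained):

```python
import subprocess

OPTIMIZED = {
    "net.core.rmem_max": 16777216,
    "net.core.wmem_max": 16777216,
    "net.ipv4.tcp_rmem": "4096 87380 16777216",
    "net.ipv4.tcp_wmem": "4096 65536 16777216",
    "net.ipv4.tcp_window_scaling": 1,
}

def apply_sysctl(settings: dict, dry_run: bool = False) -> list:
    """Build (and optionally run) one `sysctl -w key=value` per setting."""
    commands = [["sysctl", "-w", f"{k}={v}"] for k, v in settings.items()]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # needs root privileges
    return commands

# Preview the commands without touching the kernel:
for cmd in apply_sysctl(OPTIMIZED, dry_run=True):
    print(" ".join(cmd))
```

Note that `sysctl -w` changes are not persistent across reboots, which is convenient here: the experiment can flip between the restricted and optimized profiles without permanently altering the machine.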