IamTarunG/linux-network-optimization
Linux Network Buffer Experiment

This project tests how Linux kernel network settings (sysctl) affect latency when sending data using Rust. I wanted to see if changing the socket buffer sizes makes a difference for high-speed data transfers.

📊 Results

I compared two configurations using my Python script to see how they handle a 128KB payload:

  1. Restricted: Small 64KB buffers.
  2. Optimized: Large 16MB buffers.

*Network Results (latency comparison graph for the two configurations)*

Key Findings:

  • Lower Latency Baseline: The optimized settings (green line) stay consistently lower than the restricted settings (red line).
  • Fewer Spikes: With the small 64KB buffers, the system frequently hits latency spikes. This happens because the buffer is smaller than the 128KB payload, so the kernel's send queue fills up and the sender stalls until acknowledgments free space for the rest of the data.
  • Result: Increasing the Linux TCP buffer limits reduced the baseline latency by ~330 ms (~31%) for the 128KB workload.
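The relationship between these kernel caps and per-socket buffers can be observed directly. The illustrative snippet below (not part of this repository) requests a 64KB send buffer and reads back what the kernel actually granted; on Linux the kernel doubles the requested value for bookkeeping overhead and clamps it to `net.core.wmem_max`:

```python
import socket

# Illustrative only: how SO_SNDBUF interacts with the sysctl caps.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)   # request 64KB
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)  # kernel-adjusted size
s.close()
```

If `effective` comes back smaller than your payload, writes larger than the buffer will block (or return partial writes) until ACKs drain the queue, which is exactly the stall behavior seen in the restricted run.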

🛠️ How it works

  • Rust Profiler: A tool I built using tokio that sends data bursts and measures the exact time taken for each request.
  • Python Orchestrator: A script that uses subprocess to change the sysctl settings, runs the Rust binary, and plots the graph.
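A minimal sketch of what such an orchestrator might look like. The function names, binary path, and output format below are hypothetical, not copied from experiment.py:

```python
import subprocess

PROFILER_BIN = "./target/release/profiler"  # hypothetical path to the Rust binary

def sysctl_cmd(key, value):
    """Build the `sysctl -w key=value` command for one kernel setting."""
    return ["sysctl", "-w", f"{key}={value}"]

def run_trial(settings, label):
    """Apply a sysctl profile (requires root), then run the profiler once."""
    for key, value in settings.items():
        subprocess.run(sysctl_cmd(key, value), check=True)
    # Assumed output format: one latency sample (ms) per line on stdout.
    out = subprocess.run([PROFILER_BIN], capture_output=True, text=True, check=True)
    return label, [float(line) for line in out.stdout.split()]
```

Running each configuration through the same `run_trial` helper keeps the two runs comparable; only the sysctl dict changes between them.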

The sysctl settings used:

The script modifies the kernel's read/write memory limits. Here is the logic used in experiment.py:

```python
# High-performance settings for the 16MB test (the dict name is illustrative)
OPTIMIZED = {
    "net.core.rmem_max": 16777216,                # max receive buffer (bytes)
    "net.core.wmem_max": 16777216,                # max send buffer (bytes)
    "net.ipv4.tcp_rmem": "4096 87380 16777216",   # min / default / max
    "net.ipv4.tcp_wmem": "4096 65536 16777216",   # min / default / max
    "net.ipv4.tcp_window_scaling": 1,             # allow TCP windows > 64KB
}
```
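After writing settings, it is worth reading them back to confirm they actually took effect; each dotted sysctl key maps to a file under `/proc/sys`. A small hypothetical helper (not from experiment.py):

```python
from pathlib import Path

def sysctl_path(key):
    """Map a dotted sysctl key to its /proc/sys file path."""
    return "/proc/sys/" + key.replace(".", "/")

def read_sysctl(key):
    """Read the current value of a sysctl key (Linux only)."""
    return Path(sysctl_path(key)).read_text().strip()
```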
