Paxos Consensus Visualization

i built this because reading about paxos on paper wasn't cutting it. the algorithm felt too abstract. i needed to actually see it running to understand how messages flow between nodes and how consensus actually forms.

i spent weeks watching MIT's 6.824 distributed systems course on YouTube, trying to grasp the concepts. the lectures were great, but i still couldn't visualize how the algorithm actually worked in practice. so i decided to implement some of the labs myself, but i wanted to go further and add visualization.

what it does

this is a go implementation of the paxos consensus algorithm with a simple web interface that shows it running in real time. you can:

  • watch the classic two-phase protocol (prepare/promise -> propose/accept) in action
  • see messages move between nodes with animated lines
  • change the number of acceptors and see how that affects consensus
  • step through the algorithm with a detailed event log

why i built it

when i first started learning paxos, i kept running into the same problems:

  • the theory felt too abstract. i couldn't picture how messages actually flowed
  • sequence numbers were confusing until i watched them increment live
  • majority voting looked simple but was harder to reason about in practice
  • failure scenarios were almost impossible to understand without simulating them

so instead of just reading papers, i decided to build a small visualization. i kept the web side minimal (plain HTML, canvas, and vanilla JS) so the focus stays on the algorithm itself.

how it works

the implementation follows the classic two-phase paxos protocol:

phase 1: prepare

  • proposer sends prepare(n) to all acceptors
  • acceptors reply with promise(n) if n is the highest number they've seen
  • proposer waits for a majority of promises before moving on

phase 2: propose

  • proposer sends accept(n, value) to all acceptors
  • acceptors accept if they haven't promised a higher number already
  • learner confirms consensus once a majority of acceptors agree
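the decision rules an acceptor applies in those two phases can be sketched in go roughly like this (the state and field names here are my own illustration, not necessarily the repo's actual code):

```go
package main

import "fmt"

// acceptor state: the highest prepare number promised so far,
// plus the last proposal it accepted (if any).
type acceptor struct {
	promised      int    // highest n seen in a prepare
	acceptedN     int    // n of the last accepted proposal (0 = none)
	acceptedValue string // value of the last accepted proposal
}

// phase 1: promise only if n is higher than anything promised so far,
// reporting any previously accepted proposal back to the proposer.
func (a *acceptor) onPrepare(n int) (ok bool, prevN int, prevV string) {
	if n > a.promised {
		a.promised = n
		return true, a.acceptedN, a.acceptedValue
	}
	return false, 0, ""
}

// phase 2: accept unless a higher-numbered prepare has been promised since.
func (a *acceptor) onAccept(n int, v string) bool {
	if n >= a.promised {
		a.promised = n
		a.acceptedN = n
		a.acceptedValue = v
		return true
	}
	return false
}

func main() {
	a := &acceptor{}
	ok, _, _ := a.onPrepare(5)
	fmt.Println(ok)                 // true: 5 is the highest n seen
	fmt.Println(a.onAccept(5, "x")) // true: matches the outstanding promise
	ok, _, _ = a.onPrepare(3)
	fmt.Println(ok) // false: already promised 5, so 3 is rejected
}
```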

the nodes are simulated using go's concurrency tools: a goroutine for each participant, channels for message passing, and select statements for handling multiple events.
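stripped down, that wiring looks something like this: one inbox channel per node, one goroutine per node, and a select loop to multiplex messages against shutdown (names here are illustrative, not the repo's):

```go
package main

import (
	"fmt"
	"time"
)

// msg is a minimal simulated network message.
type msg struct {
	from, to int
	kind     string
}

// runNode is one simulated participant: a goroutine draining
// its inbox channel via select until told to stop.
func runNode(id int, inbox <-chan msg, done <-chan struct{}) {
	for {
		select {
		case m := <-inbox:
			fmt.Printf("node %d got %s from %d\n", id, m.kind, m.from)
		case <-done:
			return
		}
	}
}

func main() {
	inbox := make(chan msg, 8)
	done := make(chan struct{})
	go runNode(1, inbox, done)

	// "send over the network" is just a channel send.
	inbox <- msg{from: 100, to: 1, kind: "prepare"}
	time.Sleep(50 * time.Millisecond) // give the goroutine time to drain
	close(done)
}
```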

setup

git clone [email protected]:0xDVC/go-paxos.git
cd go-paxos
go mod tidy
go run main.go

then open http://localhost:8080 in your browser.

project structure

go-paxos/
├── paxos/
│   └── paxos.go          # core algorithm implementation
├── server/
│   └── server.go         # http server + state management
├── web/
│   ├── index.html        # ui structure
│   ├── styles.css        # visual styling
│   └── paxos.js          # animation + state sync
├── main.go               # entry point
└── go.mod                # dependencies

technical details

the message types are pretty straightforward:

type MsgType int

const (
    MsgPrepare MsgType = iota + 1 // Phase 1
    MsgPromise                    // Phase 1 response
    MsgPropose                    // Phase 2
    MsgAccept                     // Phase 2 response
)
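beyond the type tag, a message needs routing and proposal data; a plausible shape for the full struct looks like this (the field names are an assumption for illustration, not necessarily what the repo uses):

```go
package main

import "fmt"

type MsgType int

const (
	MsgPrepare MsgType = iota + 1 // Phase 1
	MsgPromise                    // Phase 1 response
	MsgPropose                    // Phase 2
	MsgAccept                     // Phase 2 response
)

// Msg bundles a message type with routing info and proposal data.
type Msg struct {
	Type  MsgType
	From  int    // sender node id
	To    int    // receiver node id
	Seq   int    // proposal number n
	Value string // carried only by propose/accept messages
}

func main() {
	m := Msg{Type: MsgPrepare, From: 100, To: 1, Seq: 17}
	fmt.Println(m.Type == MsgPrepare, m.Seq) // true 17
}
```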

node roles:

  • proposer (id 100) - initiates consensus, manages phases
  • acceptors (ids 1,2,3...) - respond to prepare/propose
  • learner (id 200) - detects when consensus is reached

sequence numbers use a simple formula (seq<<4 | id) to ensure global uniqueness across proposers.
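with that scheme, two proposers bumping the same local counter still generate distinct numbers, and a higher counter always wins regardless of id (assuming ids fit in 4 bits). a quick sketch:

```go
package main

import "fmt"

// proposalNumber packs a local counter and a node id into one number,
// mirroring the (seq<<4 | id) formula: globally unique as long as
// ids fit in the low 4 bits, ordered primarily by the counter.
func proposalNumber(seq, id int) int {
	return seq<<4 | id
}

func main() {
	// same local seq, different ids -> different numbers
	fmt.Println(proposalNumber(1, 3)) // 19
	fmt.Println(proposalNumber(1, 5)) // 21
	// a higher seq beats any lower seq, whatever the ids
	fmt.Println(proposalNumber(2, 3) > proposalNumber(1, 5)) // true (35 > 21)
}
```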

what i learned

building this taught me a few things about paxos, though i'll probably write up a proper blog post about it later:

  • sequence numbers are everything: they prevent conflicts and ensure higher numbers win
  • majority logic is elegant: simple but powerful
  • two phases are necessary: you can't skip either one
  • timeouts matter: they prevent infinite waiting

the visualization approach worked well: watching consensus form step by step made the algorithm click in a way that reading about it never did.

limitations

this is a weekend project, so it has some obvious limitations:

  • single proposer only (no multi-proposer scenarios)
  • no failures (all nodes are reliable)
  • fixed timeouts (hardcoded delays for visualization)
  • simple network simulation with channels

the visualization is polling-based (websockets would be better) and the canvas sizing is fixed.

future ideas

if i come back to this, i'd want to add:

  • multi-paxos for handling multiple consensus rounds
  • failure simulation (drop messages, crash nodes)
  • leader election with dynamic proposer selection
  • websocket updates instead of polling

but honestly, it served its purpose. now i actually get how this stuff works instead of just reading about it.

built over several weekends after watching MIT's 6.824 distributed systems course. the visualization made all the difference.
