vianarafael/codechrono
CodeChrono helps you estimate how long software tasks will take, based on how long you actually take to build them.

It's a local LLM-powered dev session logger that watches your terminal, code changes, and app focus, then summarizes what you worked on. Estimates come from your own history, not guesswork.


🚀 Features

  • ✅ Tracks terminal commands and app focus
  • ✅ Summarizes git diffs
  • ✅ Uses a local LLM (via Ollama) to generate summaries
  • ✅ Stores everything locally in SQLite
  • ✅ Estimates time to complete new tasks based on your real history
  • ✅ Fully offline, no tracking

📊 Your Dev Work — Visualized

See what you worked on, how long it took, and whether you’re speeding up — all in a clean local dashboard.

CodeChrono Dashboard Screenshot


📦 Requirements

  • Python 3.8+
  • Ollama installed and running a model (e.g. qwen3)
  • Shell that supports PROMPT_COMMAND (bash, zsh)
  • App focus tracking (optional):
    • xdotool for Linux
    • osascript for macOS (basic support via AppleScript)
    • NirCmd or custom PowerShell for Windows
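App focus tracking boils down to asking the OS which window is currently active. A minimal sketch of how that could look on Linux with xdotool (the function name and fallback behavior are illustrative, not CodeChrono's actual implementation):

```python
import shutil
import subprocess

def get_active_window_title():
    """Return the focused window's title via xdotool, or None if unavailable."""
    if shutil.which("xdotool") is None:
        return None  # xdotool not installed (e.g. macOS/Windows)
    try:
        out = subprocess.run(
            ["xdotool", "getactivewindow", "getwindowname"],
            capture_output=True, text=True, timeout=2,
        )
        return out.stdout.strip() if out.returncode == 0 else None
    except (subprocess.TimeoutExpired, OSError):
        return None
```

A tracker would poll this periodically and log the title alongside a timestamp.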

📥 Setup

  1. Clone this repo

  2. Install Python dependencies

    pip install -r requirements.txt
  3. Set up terminal logging and run Ollama

    bash scripts/setup_terminal_logger.sh
    source ~/.bashrc   # or source ~/.zshrc

    ollama run qwen3:14b-q4_K_M  # using another model? update MODEL_NAME in narrator/llm.py (see Model Compatibility below)

🛠 Usage

# start a session
python run.py start -m "refactor login flow"

# stop the session
python run.py stop

# view recent summaries
python run.py report
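Under the hood, start/stop amounts to writing timestamped rows into SQLite. A rough sketch of what a session table could look like (a hypothetical schema for illustration; check the code for CodeChrono's real one):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # CodeChrono stores a local file instead
conn.execute("""
    CREATE TABLE IF NOT EXISTS sessions (
        id INTEGER PRIMARY KEY,
        description TEXT,
        started_at REAL,
        ended_at REAL,
        summary TEXT
    )
""")

# `start` inserts a row with the task description and a start timestamp
started = time.time()
cur = conn.execute(
    "INSERT INTO sessions (description, started_at) VALUES (?, ?)",
    ("refactor login flow", started),
)

# `stop` fills in the end time; duration is derived on read
conn.execute(
    "UPDATE sessions SET ended_at = ? WHERE id = ?",
    (started + 7200, cur.lastrowid),  # pretend a 2h session
)
desc, hours = conn.execute(
    "SELECT description, (ended_at - started_at) / 3600 FROM sessions"
).fetchone()
print(f"{desc}: {hours:.1f}h")  # → refactor login flow: 2.0h
```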

🧪 Example Output

## Summary (2h session)
- Fixed bug in `auth.py` handling token expiration
- Ran tests and confirmed fix
- Researched error via Stack Overflow

🖥 Launch the Dashboard

To visualize your session history and track your development speed over time, run:

streamlit run dashboard.py

Once it starts, open http://localhost:8501 in your browser.

The dashboard shows:

  • ⏱️ Time spent per session
  • ⚡ Your fastest vs slowest tasks
  • 📉 Whether you're getting faster or slower
  • 🧱 A breakdown of features you've built

💡 Tip: Make sure you’ve logged at least one session before launching the dashboard.

🔮 Estimate time for a new feature

python run.py estimate -m "build settings page for admin panel"

🧪 Example Output

🧮 Estimated Time: 2–3 hours.
This task is similar to your previous “settings UI” session (3h), but may go faster based on recency.
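One simple way to produce an estimate like this is to score past session descriptions for word overlap with the new task and average the durations of the closest matches. A toy sketch, not CodeChrono's actual algorithm (which also uses the LLM):

```python
def estimate_hours(task, history):
    """history: list of (description, hours). Returns a naive estimate or None."""
    task_words = set(task.lower().split())
    scored = []
    for desc, hours in history:
        overlap = len(task_words & set(desc.lower().split()))
        if overlap:
            scored.append((overlap, hours))
    if not scored:
        return None  # nothing comparable logged yet
    # average the durations of the two most similar sessions
    scored.sort(reverse=True)
    top = [h for _, h in scored[:2]]
    return sum(top) / len(top)

history = [
    ("build settings UI", 3.0),
    ("fix auth token bug", 1.5),
    ("write settings tests", 2.0),
]
print(estimate_hours("build settings page for admin panel", history))  # → 2.5
```

A real implementation would weight by recency as well, as the example output above suggests.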

🏎 How CodeChrono Makes You Faster

CodeChrono doesn’t just track what you did — it helps you build speed through self-awareness.

Know Your Real Benchmarks
Stop guessing how long something “should” take. See how long you actually took — and plan accordingly.

🔁 Catch Time Sinks
Spot patterns in what slows you down (auth flows? test setups?) so you can simplify or automate them next time.

🧠 Improve Through Feedback
Use summaries + durations as a personal feedback loop. Reflect, adjust, and optimize how you work.

🔮 Estimate With Confidence
Replace hesitation with history-backed estimates. No more overbooking or under-planning your dev time.


🤖 Ask the LLM Questions

CodeChrono isn’t just for tracking — it can answer smart questions based on your past dev sessions.

You're already storing:

  • Timestamps
  • Descriptions
  • Summaries
  • Durations

So you can build a new CLI command like:

python run.py query -m "What features took the longest in the last month?"

Behind the scenes, this:

  1. Pulls relevant session summaries from SQLite
  2. Sends them to the LLM with a prompt like:

    Here's a list of my dev sessions from the past month. Please analyze:
    - Which tasks took the longest?
    - Are there any patterns or inefficiencies?
    - What types of work am I fastest at?

    Give recommendations if possible.

This turns CodeChrono into a local dev analyst — not just a logger.
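The two steps above can be sketched in a few lines. The row shape (description, hours, summary) is an assumption based on the data listed earlier, not CodeChrono's actual schema:

```python
def build_query_prompt(rows, question):
    """rows: (description, hours, summary) tuples pulled from SQLite."""
    lines = [f"- {desc} ({hours:.1f}h): {summary}" for desc, hours, summary in rows]
    return (
        "Here's a list of my dev sessions from the past month:\n"
        + "\n".join(lines)
        + f"\n\nQuestion: {question}\n"
        + "Analyze durations and patterns, and give recommendations if possible."
    )

rows = [
    ("refactor login flow", 2.0, "Fixed token expiration bug in auth.py"),
    ("build settings UI", 3.0, "New settings page with form validation"),
]
prompt = build_query_prompt(rows, "What features took the longest?")
print(prompt)
```

The resulting string would then be sent to the local model via Ollama, the same path the summarizer already uses.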

⚙️ Model Compatibility

CodeChrono is designed and tested with the qwen3:14b-q4_K_M model.

Qwen models often include reasoning blocks like <think>...</think>, which CodeChrono strips automatically. If you're using a different model (e.g. mistral, llama2), those tags may not appear, or the output format may change entirely.

To adjust for this, you can update your llm.py like so:

import re

if "qwen" in MODEL_NAME:
    response = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL)

⚡️ Make It Frictionless

If you're like me, you don't want to remember commands or activate virtual environments every time you build. Here's how to make CodeChrono always ready:

Add these to your .bashrc or .zshrc:

alias tcstart='python ~/<path-to-project>/codechrono/run.py start -m'
alias tcstop='python ~/<path-to-project>/codechrono/run.py stop'
alias tcreport='python ~/<path-to-project>/codechrono/run.py report'
alias tcest='python ~/<path-to-project>/codechrono/run.py estimate -m'

Then just type:

tcstart "build auth"
