ARIS ⚔️ (Auto-Research-In-Sleep) — Lightweight Markdown-only skills for autonomous ML research: cross-model review loops, idea discovery, and experiment automation. No framework, no lock-in — works with Claude Code, Codex, OpenClaw, or any LLM agent.
Research Agora: Claude Code skills, benchmarks & tools for ML researchers — paper writing, citation verification, experiment tracking, LaTeX automation
🐉 The transformer is a brilliant hack scaled past its limits. DREX is what comes next — tiered memory 🧠, sparse execution ⚡, and a learned controller that knows what to remember 💾✨
Causal analysis framework using Double Machine Learning to quantitatively isolate the effect of model size on deep learning performance while controlling for confounders such as dataset size, training time, and hyperparameters.
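The partialling-out estimator at the core of Double Machine Learning can be sketched with scikit-learn alone. Everything below is illustrative: the synthetic confounders, the random-forest nuisance models, and the true effect of 2.0 stand in for the repo's actual data and learners.

```python
# Minimal partialling-out DML sketch (synthetic data; not the repo's pipeline)
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 500
# Confounders W (stand-ins for dataset size, training time, hyperparameters),
# treatment T (model size), outcome Y (performance); true effect of T on Y is 2.0
W = rng.normal(size=(n, 3))
T = W @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)
Y = 2.0 * T + W @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

# Stage 1: cross-fitted nuisance predictions of T and Y from the confounders
T_hat = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=0), W, T, cv=5)
Y_hat = cross_val_predict(RandomForestRegressor(n_estimators=100, random_state=0), W, Y, cv=5)

# Stage 2: regress outcome residuals on treatment residuals -> causal effect estimate
T_res, Y_res = T - T_hat, Y - Y_hat
theta = (T_res @ Y_res) / (T_res @ T_res)
print(f"estimated effect: {theta:.2f}")  # should land near the true effect 2.0
```

Cross-fitting (predicting each fold with models trained on the others) is what lets flexible learners estimate the nuisances without biasing the final effect estimate.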
A comprehensive analysis of 5 HPO algorithms: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), PyHopper, and Bayesian Optimization with HyperBand (BOHB).
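The resource-allocation core shared by HyperBand and BOHB is successive halving, which can be sketched in a few lines. The objective function, learning-rate search space, and budgets below are toy stand-ins, not the benchmark used in the analysis.

```python
# Toy successive-halving sketch (the scheduling core of HyperBand/BOHB)
import numpy as np

rng = np.random.default_rng(42)

def loss(lr, budget):
    # Hypothetical objective: minimized around lr = 0.1, noisier at small budgets
    return (np.log10(lr) + 1.0) ** 2 + rng.normal(scale=1.0 / budget)

configs = 10 ** rng.uniform(-4, 0, size=27)  # 27 random learning rates in [1e-4, 1]
budget = 1
while len(configs) > 1:
    scores = np.array([loss(lr, budget) for lr in configs])
    keep = np.argsort(scores)[: max(1, len(configs) // 3)]  # keep the top third
    configs = configs[keep]
    budget *= 3  # eta = 3: survivors get 3x the evaluation budget
print(f"best lr ~ {configs[0]:.3g}")
```

BOHB replaces the uniform random sampling of configurations with a model-based (TPE-style) proposal step, but keeps this same halving schedule.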
Julia implementation of ST-ProtoPNet for interpretable classification with support and trivial prototypes. Built on Flux.jl with custom losses, data prep tools, and 2D visualization.
A curated collection of key research papers and resources on LLMs and deep learning, organized by topic to provide a clear learning path from fundamentals to advanced techniques.