Add JaxDataset benchmark variants #81
Conversation
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Added a comparison of three marginal computation methods to `benchmarks/benchmark_marginals.py` (the JIT placement for the last two is sketched below):

- No JIT (baseline).
- JIT on `CliqueVector.from_projectable` (compiles the entire loop).
- JIT on `JaxDataset.project` (cached, compiles per clique).
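For context, the key difference between the two JIT variants is where `jax.jit` is applied. The sketch below is illustrative only and does not use the actual benchmark code or mbi signatures; it stands in for a clique marginal with a single-column histogram, and names like `project_one` and `project_all` are hypothetical:

```python
import jax
import jax.numpy as jnp
from functools import partial

# Toy stand-in for one clique marginal: a histogram over a single column.
@partial(jax.jit, static_argnames="size")
def project_one(column, size):
    # A static `length` keeps the output shape known at trace time.
    return jnp.bincount(column, length=size)

def project_per_clique(data, cliques, sizes):
    # "JIT on project": each clique is compiled separately; compilations are
    # cached, but every new (shape, size) combination triggers a recompile.
    return [project_one(data[:, c], sizes[c]) for c in cliques]

@partial(jax.jit, static_argnames=("cliques", "sizes"))
def project_all(data, cliques, sizes):
    # "JIT on the whole loop": the loop over cliques is unrolled and compiled
    # into a single program, trading compile time and memory for runtime speed.
    return [jnp.bincount(data[:, c], length=sizes[c]) for c in cliques]
```

For example, with `data = jnp.zeros((10_000, 3), dtype=jnp.int32)`, `cliques = (0, 2)`, and `sizes = (10, 5, 7)`, `project_all(data, cliques, sizes)` compiles once for the whole loop, while `project_per_clique` compiles `project_one` once per distinct column shape and size.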
Also updated `src/mbi/dataset.py` to support JIT compilation by passing the static `length` parameter to `jnp.bincount` in `JaxDataset.project`.
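On the `jnp.bincount` change: without `length`, the output size depends on the maximum value in the input, which is not known at trace time, so the call cannot be jitted; a static `length` fixes the output shape. A minimal sketch of the pattern (the `num_bins` value here is illustrative, not the actual parameter used in `JaxDataset.project`):

```python
import jax
import jax.numpy as jnp

x = jnp.array([0, 1, 1, 3])

# Without `length`, the output size depends on x.max(), which is unknown
# under tracing, so jax.jit(jnp.bincount)(x) would raise a concretization error.

# With a static `length`, the output shape is fixed at trace time and the
# call compiles cleanly.
num_bins = 4
counts = jax.jit(lambda v: jnp.bincount(v, length=num_bins))(x)
print(counts)  # [1 2 0 1]
```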
Results indicate that JIT on `CliqueVector` is fastest for small to medium N (0.16s for N=10k vs. 80s for No JIT), but fails with OOM for N=1M on standard hardware. JIT on `project` is extremely slow due to excessive recompilation overhead. No JIT provides stable scaling.

PR created automatically by Jules for task 13512845416299830764 started by @ryan112358