See:

```python
# FIXME: remove this when ilab knows to pass batch_size=0 with llama.cpp
if batch_size is None:
    batch_size = 0
```
We currently default to zero, but this was a temporary measure until instructlab/instructlab#1797 lands; after that, `PipelineContext(batch_size=None)` will default to 8.
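For illustration, the intended default handling could look something like the sketch below. This is a hypothetical stand-in, not the actual `instructlab/sdg` class; the constant name and constructor signature are assumptions.

```python
# Hypothetical sketch of the default-handling described above;
# this PipelineContext is a stand-in, not the real instructlab/sdg class.
DEFAULT_BATCH_SIZE = 8

class PipelineContext:
    def __init__(self, batch_size=None):
        # None means "use the library default" (8 once the change lands);
        # callers pass 0 explicitly to disable batching, e.g. with llama.cpp.
        if batch_size is None:
            batch_size = DEFAULT_BATCH_SIZE
        self.batch_size = batch_size
```

With this shape, `PipelineContext()` picks up the default of 8, while `PipelineContext(batch_size=0)` keeps batching disabled, so the llama.cpp special case no longer needs the `None` check shown above.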