This repository was archived by the owner on Aug 14, 2025. It is now read-only.
fix: model register missing model-type and not accepting metadata (#11)
# What does this PR do?
Added the ability to specify the model type when registering models.
Also fixed a bug where passing metadata resulted in the following
error:
```
Error Type: BadRequestError
Details: Error code: 400 - {'error': {'detail': {'errors': [{'loc': ['body', 'metadata'], 'msg': 'Input should be a valid dictionary', 'type': 'dict_type'}]}}}
```
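The 400 arises because the server validates `metadata` as a JSON object: if the client forwards the `--metadata` flag's value as a raw string rather than a parsed dictionary, validation fails with the `dict_type` error above. A minimal sketch of the kind of fix involved (the helper name is hypothetical, not the client's actual internals):

```python
import json


def parse_metadata(raw: str) -> dict:
    """Hypothetical helper: parse a --metadata JSON string into the
    dict the server's validation expects."""
    value = json.loads(raw)
    if not isinstance(value, dict):
        # A JSON list or scalar would still fail the server's dict check.
        raise ValueError("metadata must be a JSON object")
    return value


# Forwarding the raw string triggers the 'dict_type' error;
# the parsed dict passes validation.
print(parse_metadata('{"embedding_dimension": 384}'))
```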
Closes: #215
## Test Plan
Run the following commands:
```
# Note a Llama Stack Server must be running
# Create a venv
uv sync --python 3.12
# Install the LSC with the new code changes
uv pip install -e .
# List the available models
llama-stack-client models list
# Register the granite-embedding-30m embedding model (NOTE: sentence-transformers must be available as an inference provider)
llama-stack-client models register granite-embedding-30m --provider-id "sentence-transformers" --provider-model-id ibm-granite/granite-embedding-30m-english --metadata '{"embedding_dimension": 384}' --model-type embedding
# Verify the registered embedding model is present
llama-stack-client models list
```
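As a hedged illustration of what the `models register` command above sends, the flags map onto a request body roughly like the following. Field names are inferred from the CLI flags and the error message's `['body', 'metadata']` location, not from the client's verified internals:

```python
import json


def build_register_payload(model_id, provider_id, provider_model_id,
                           metadata_json, model_type):
    """Hypothetical sketch: assemble the register request body from
    CLI-style flag values."""
    return {
        "model_id": model_id,
        "provider_id": provider_id,
        "provider_model_id": provider_model_id,
        # json.loads turns the --metadata string into the dict the
        # server's 'dict_type' validation requires.
        "metadata": json.loads(metadata_json),
        "model_type": model_type,
    }


payload = build_register_payload(
    "granite-embedding-30m",
    "sentence-transformers",
    "ibm-granite/granite-embedding-30m-english",
    '{"embedding_dimension": 384}',
    "embedding",
)
print(payload)
```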