@@ -85,10 +85,10 @@ For example, the `vectordb-uri` argument can be implemented using the `click` mo
 
 The embedding model used to generate text embeddings must be downloaded locally before executing the pipeline.
 
-For example, this command can be used to download the `sentence-transformers/all-minilm-l6-v2` model to the local models cache:
+For example, this command can be used to download the `ibm-granite/granite-embedding-125m-english` model to the local models cache:
 
 ```bash
-ilab model download -rp sentence-transformers/all-minilm-l6-v2
+ilab model download -rp ibm-granite/granite-embedding-125m-english
 ```
 
 If the configured embedding model has not been cached, the command execution will terminate with an error. This requirement applies
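As an illustration of the cache check described above, here is a minimal sketch (not the actual `ilab` implementation) that resolves the configured embedding model from the local Hugging Face cache and fails with a pointer to `ilab model download` when it is missing. The use of `huggingface_hub` for the lookup and the exact error handling are assumptions; `ilab` manages its own models directory.

```python
# Sketch only: pre-flight check that the configured embedding model is cached locally.
# Assumption: the model is resolvable from the Hugging Face cache via huggingface_hub.
import sys

from huggingface_hub import snapshot_download
from huggingface_hub.utils import LocalEntryNotFoundError

EMBEDDING_MODEL = "ibm-granite/granite-embedding-125m-english"


def ensure_model_cached(model_id: str) -> str:
    """Return the local snapshot path of a cached model, or exit with an error."""
    try:
        # local_files_only=True never hits the network; it only resolves
        # snapshots that are already present in the local cache.
        return snapshot_download(repo_id=model_id, local_files_only=True)
    except LocalEntryNotFoundError:
        sys.exit(
            f"Embedding model '{model_id}' is not cached locally. "
            f"Run: ilab model download -rp {model_id}"
        )


if __name__ == "__main__":
    print(ensure_model_cached(EMBEDDING_MODEL))
```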
@@ -336,7 +336,7 @@ chat:
   retriever:
     top_k: 20
     embedder:
-      model_name: sentence-transformers/all-minilm-l6-v2
+      model_name: ibm-granite/granite-embedding-125m-english
     document_store:
       type: milvuslite
       uri: embeddings.db
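For illustration, a minimal pydantic sketch of how the retriever block above could be modeled and validated. The class names, nesting, and defaults below are hypothetical and simply mirror the YAML keys in this example; they are not the actual instructlab configuration classes.

```python
# Illustrative only: model the retriever section of the chat config and validate it.
import yaml
from pydantic import BaseModel, Field


class DocumentStoreConfig(BaseModel):
    type: str = "milvuslite"
    uri: str = "embeddings.db"


class EmbedderConfig(BaseModel):
    model_name: str = "ibm-granite/granite-embedding-125m-english"


class RetrieverConfig(BaseModel):
    top_k: int = 20
    embedder: EmbedderConfig = Field(default_factory=EmbedderConfig)
    document_store: DocumentStoreConfig = Field(default_factory=DocumentStoreConfig)


snippet = """
retriever:
  top_k: 20
  embedder:
    model_name: ibm-granite/granite-embedding-125m-english
  document_store:
    type: milvuslite
    uri: embeddings.db
"""

config = RetrieverConfig(**yaml.safe_load(snippet)["retriever"])
print(config.embedder.model_name, config.document_store.uri)
```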
@@ -427,7 +427,7 @@ ilab serve --rag-embeddings --image-name=docker.io/user/my_rag_artifacts:1.0 --p
 ilab model chat --rag --retriever-type api --retriever-uri http://localhost:8123
 ```
 
-[shareable-excalidraw]: https://excalidraw.com/#json=UNReLpF8DSFoe-zg4w7I8,rZjZB4ZQnpmkM084B4qjkw
+[shareable-excalidraw]: https://excalidraw.com/#json=P2mG25EAjeBRvqzpfIGXv,tlKJPzA2HakGygxJbmn-VQ
 [ilab-knowledge]: https://github.com/instructlab/taxonomy?tab=readme-ov-file#getting-started-with-knowledge-contributions
 [sdg-diff-strategy]: https://github.com/instructlab/sdg/blob/main/src/instructlab/sdg/utils/taxonomy.py
 [chat_template]: https://github.com/instructlab/instructlab/blob/0a773f05f8f57285930df101575241c649f591ce/src/instructlab/configuration.py#L244