This is a fork of KServe that documents how we built the image:
*******782.dkr.ecr.us-east-1.amazonaws.com/library/kserve-huggingfaceserver:v0.16.0
The official image released by KServe had several high and critical CVEs. To build our version, use python/huggingface_server.Dockerfile:
```bash
$ cd python
$ docker build -t striveworks/huggingfaceserver:latest -f huggingface_server.Dockerfile .
```

To test this image for inference, start it via:
```bash
docker run -p 8080:8080 --gpus all -it -v /root/models/gpt-oss-20b/:/mnt/models striveworks/huggingfaceserver:latest --max_length 8000 --quantization mxfp4 --model_name gpt --trust-remote-code --enforce-eager --enable-auto-tool-choice --tool-call-parser openai --kv-cache-memory=21777640857
```

where /root/models/gpt-oss-20b is a local path to your LLM (in this case, gpt-oss-20b). The arguments after the image name are all passed directly to the vLLM engine; see the vLLM docs for details on those.
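Model loading can take a while, so requests sent immediately after `docker run` may fail. The snippet below is a minimal sketch (assuming the port mapping above, i.e. the server published on localhost:8080) that simply waits for the port to accept connections before you start testing:

```python
# Minimal sketch: block until the container's published port accepts TCP connections.
# Assumes the docker run command above, i.e. the server listening on localhost:8080.
import socket
import time

def wait_for_port(host: str = "localhost", port: int = 8080, timeout_s: float = 600.0) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return  # port is open; the server is accepting connections
        except OSError:
            time.sleep(2)  # model loading can take several minutes
    raise TimeoutError(f"{host}:{port} did not open within {timeout_s} seconds")

wait_for_port()
```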
Once you have that running, you can test inference via cURL:
```bash
curl -v http://0.0.0.0:8080/openai/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "gpt",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant that speaks like Shakespeare."},
        {"role": "user", "content": "Write a poem about colors"}
    ],
    "max_tokens": 10000,
    "stream": false,
    "reasoning_effort": "low"
}'
```

Note that `reasoning_effort` is not available for all models.
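As an alternative to cURL, here is a minimal sketch using the `openai` Python package against the same OpenAI-compatible route (assumptions: the package is installed and the container above is reachable at localhost:8080):

```python
# Minimal sketch: query the OpenAI-compatible endpoint with the openai Python SDK.
# Assumes `pip install openai` and the container above listening on localhost:8080.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/openai/v1",  # route used in the cURL example above
    api_key="not-needed",                        # no auth on a local test container
)

response = client.chat.completions.create(
    model="gpt",  # must match the --model_name passed to the container
    messages=[
        {"role": "system", "content": "You are a helpful assistant that speaks like Shakespeare."},
        {"role": "user", "content": "Write a poem about colors"},
    ],
    max_tokens=10000,
    extra_body={"reasoning_effort": "low"},  # not available for all models
)
print(response.choices[0].message.content)
```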
The image:
**********782.dkr.ecr.us-east-1.amazonaws.com/library/kserve-huggingfaceserver:v0.16.0.sha256.1
is a temporary workaround to allow vLLM to work in FIPS-constrained environments, where hashlib.md5 is disabled. This image was made by first building the one above, then exec-ing into it and running the following commands:
```bash
$ cd /kserve-workspace/prod_venv/lib64/python3.12/site-packages/vllm/
$ find . -type f -exec sed -i 's/hashlib\.md5/hashlib.sha256/g' {} +
```

This replaces all hashlib.md5 calls with hashlib.sha256. Once that change is made inside the container, the running container is committed (for example with `docker commit`) so the changes persist.
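For background, the snippet below is a minimal sketch (not part of the image) illustrating why the substitution is needed: on a FIPS-enabled Python/OpenSSL build, constructing an MD5 hash raises, while SHA-256 is a FIPS-approved algorithm and always works.

```python
# Minimal sketch of the FIPS behavior that motivates the sed above.
# On a FIPS-enabled OpenSSL/Python build, constructing an MD5 hash raises,
# while SHA-256 (a FIPS-approved algorithm) is always available.
import hashlib

data = b"cache key material"

try:
    digest = hashlib.md5(data).hexdigest()  # raises ValueError under FIPS
except ValueError as exc:
    print(f"md5 unavailable: {exc}")
    digest = hashlib.sha256(data).hexdigest()  # what the patched vLLM uses instead

print(digest)
```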
KServe is a standardized distributed generative and predictive AI inference platform for scalable, multi-framework deployment on Kubernetes.
KServe is being used by many organizations and is a Cloud Native Computing Foundation (CNCF) incubating project.
For more details, visit the KServe website.
Single platform that unifies Generative and Predictive AI inference on Kubernetes. Simple enough for quick deployments, yet powerful enough to handle enterprise-scale AI workloads with advanced features.
Generative AI
- LLM-Optimized: OpenAI-compatible inference protocol for seamless integration with large language models
- GPU Acceleration: High-performance serving with GPU support and optimized memory management for large models
- Model Caching: Intelligent model caching to reduce loading times and improve response latency for frequently used models
- KV Cache Offloading: Advanced memory management with KV cache offloading to CPU/disk for handling longer sequences efficiently
- Autoscaling: Request-based autoscaling capabilities optimized for generative workload patterns
- Hugging Face Ready: Native support for Hugging Face models with streamlined deployment workflows
Predictive AI
- Multi-Framework: Support for TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX, and more
- Intelligent Routing: Seamless request routing between predictor, transformer, and explainer components with automatic traffic management
- Advanced Deployments: Canary rollouts, inference pipelines, and ensembles with InferenceGraph
- Autoscaling: Request-based autoscaling with scale-to-zero for predictive workloads
- Model Explainability: Built-in support for model explanations and feature attribution to understand prediction reasoning
- Advanced Monitoring: Enables payload logging, outlier detection, adversarial detection, and drift detection
- Cost Efficient: Scale-to-zero on expensive resources when not in use, reducing infrastructure costs
To learn more about KServe, how to use its supported features, and how to participate in the KServe community, please follow the KServe website documentation. Additionally, we have compiled a list of presentations and demos that dive into the details.
- Standard Kubernetes Installation: Compared to the Serverless Installation, this is a more lightweight installation. However, this option does not support canary deployment or request-based autoscaling with scale-to-zero.
- Knative Installation: By default, KServe installs Knative for serverless deployment of InferenceService.
- ModelMesh Installation: You can optionally install ModelMesh to enable high-scale, high-density and frequently-changing model serving use cases.
- Quick Installation: Install KServe on your local machine.
KServe is an important add-on component of Kubeflow; learn more from the Kubeflow KServe documentation. Check out the following guides for running on AWS or on OpenShift Container Platform.
