Merged
2 changes: 1 addition & 1 deletion docs/get-started/quickstart.mdx
@@ -36,7 +36,7 @@ If you prefer [uv](https://docs.astral.sh/uv/):
uv tool install -U openshell
```

After installing the CLI, run `openshell --help` in your terminal to see the full CLI reference.
After installing the CLI, run `openshell --help` in your terminal to view the full CLI reference.

<Tip>
You can also clone the [NVIDIA OpenShell GitHub repository](https://github.com/NVIDIA/OpenShell) and use the `/openshell-cli` skill to load the CLI reference into your agent.
4 changes: 2 additions & 2 deletions docs/get-started/tutorials/first-network-policy.mdx
@@ -4,7 +4,7 @@
title: "Write Your First Sandbox Network Policy"
sidebar-title: "First Network Policy"
slug: "get-started/tutorials/first-network-policy"
description: "See how OpenShell network policies work by creating a sandbox, observing default-deny in action, and applying a fine-grained L7 read-only rule."
description: "Learn how OpenShell network policies work by creating a sandbox, observing default-deny in action, and applying a fine-grained L7 read-only rule."
keywords: "Generative AI, Cybersecurity, Tutorial, Policy, Network Policy, Sandbox, Security"
---

@@ -117,7 +117,7 @@ network_policies:
- { path: /usr/bin/curl }
```
The `filesystem_policy`, `landlock`, and `process` sections preserve the default sandbox settings. This is required because `policy set` replaces the entire policy. The `network_policies` section is the key part: `curl` may make GET, HEAD, and OPTIONS requests to `api.github.com` over HTTPS. Everything else is denied. The proxy auto-detects TLS on HTTPS endpoints and terminates it to inspect each HTTP request and enforce the `read-only` access preset at the method level.
The `filesystem_policy`, `landlock`, and `process` sections preserve the default sandbox settings. This is required because `policy set` replaces the entire policy. The `network_policies` section is the key part: `curl` can make GET, HEAD, and OPTIONS requests to `api.github.com` over HTTPS. Everything else is denied. The proxy auto-detects TLS on HTTPS endpoints and terminates it to inspect each HTTP request and enforce the `read-only` access preset at the method level.
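The method-level rule described above can be sketched in isolation. The field names below are illustrative assumptions rather than the exact OpenShell policy schema; the authoritative layout is the full policy shown in this tutorial.

```yaml
# Illustrative sketch only -- field names are assumptions, not the exact schema.
network_policies:
  - name: github_api_read_only
    processes:
      - { path: /usr/bin/curl }   # only curl receives this grant
    hosts:
      - api.github.com            # HTTPS; the proxy terminates TLS to inspect requests
    access: read-only             # enforced at the method level: GET, HEAD, OPTIONS
```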

Apply it:

6 changes: 3 additions & 3 deletions docs/get-started/tutorials/github-sandbox.mdx
@@ -11,7 +11,7 @@ keywords: "Generative AI, Cybersecurity, Tutorial, GitHub, Sandbox, Policy, Clau
This tutorial walks through an iterative sandbox policy workflow. You launch a sandbox, ask Claude Code to push code to GitHub, and observe the default network policy denying the request.
You then diagnose the denial from your machine and from inside the sandbox, apply a policy update, and verify that the policy update to the sandbox takes effect.

After completing this tutorial, you will have:
After completing this tutorial, you have:

- A running sandbox with Claude Code that can push to a GitHub repository.
- A custom network policy that grants GitHub access for a specific repository.
@@ -131,7 +131,7 @@ The sandbox runs a proxy that enforces policies on outbound traffic.
The `github_rest_api` policy allows GET requests (used to read the file)
but blocks PUT/write requests to GitHub. This is a sandbox-level restriction,
not a token issue. No matter what token you provide, pushes through the API
will be blocked until the policy is updated.
are blocked until you update the policy.
</Accordion>

Both perspectives confirm the same thing: the proxy is doing its job. The default policy is designed to be restrictive. To allow GitHub pushes, you need to update the network policy.
@@ -162,7 +162,7 @@ Refer to the following policy example to compare with the generated policy befor

The following YAML shows a complete policy that extends the [default policy](/reference/default-policy) with GitHub access for a single repository. Replace `<org>` with your GitHub organization or username and `<repo>` with your repository name.

The `filesystem_policy`, `landlock`, and `process` sections are static. They are read once at sandbox creation and cannot be changed by a hot-reload. They are included here for completeness so the file is self-contained, but only the `network_policies` section takes effect when you apply this to a running sandbox.
The `filesystem_policy`, `landlock`, and `process` sections are static. OpenShell reads them at sandbox creation, and a hot reload cannot change them. They are included here for completeness so the file is self-contained, but only the `network_policies` section takes effect when you apply this to a running sandbox.

```yaml
version: 1
2 changes: 1 addition & 1 deletion docs/get-started/tutorials/index.mdx
@@ -29,6 +29,6 @@ Route inference through Ollama using cloud-hosted or local models, and verify it

<Card title="Local Inference with LM Studio" href="/get-started/tutorials/local-inference-lmstudio">

Route inference to a local LM Studio server via the OpenAI or Anthropic compatible APIs.
Route inference to a local LM Studio server using the OpenAI-compatible or Anthropic-compatible APIs.
</Card>
</Cards>
18 changes: 9 additions & 9 deletions docs/get-started/tutorials/inference-ollama.mdx
@@ -8,12 +8,12 @@ description: "Run local and cloud models inside an OpenShell sandbox using the O
keywords: "Generative AI, Cybersecurity, Tutorial, Inference Routing, Ollama, Local Inference, Sandbox"
---

This tutorial covers two ways to use Ollama with OpenShell:
This tutorial covers two ways of running Ollama with OpenShell:

1. **Ollama sandbox (recommended)** — a self-contained sandbox with Ollama, Claude Code, and Codex pre-installed. One command to start.
2. **Host-level Ollama** — run Ollama on the gateway host and route sandbox inference to it. Useful when you want a single Ollama instance shared across multiple sandboxes.
1. Ollama sandbox. This is the recommended way to run Ollama: a self-contained sandbox with Ollama, Claude Code, and Codex pre-installed. One command starts it.
2. Host-level Ollama. This is an alternative for running Ollama: run it on the gateway host and route sandbox inference to it. Use this option when you want a single Ollama instance shared across multiple sandboxes.

After completing this tutorial, you will know how to:
After completing this tutorial, you know how to:

- Launch the Ollama community sandbox for a batteries-included experience.
- Use `ollama launch` to start coding agents inside a sandbox.
@@ -190,11 +190,11 @@ The response should be JSON from the model.

Common issues and fixes:

- **Ollama not reachable from sandbox** — Ollama must be bound to `0.0.0.0`, not `127.0.0.1`. This applies to host-level Ollama only; the community sandbox handles this automatically.
- **`OPENAI_BASE_URL` wrong** — Use `http://host.openshell.internal:11434/v1`, not `localhost` or `127.0.0.1`.
- **Model not found** — Run `ollama ps` to confirm the model is loaded. Run `ollama pull <model>` if needed.
- **HTTPS vs HTTP** — Code inside sandboxes must call `https://inference.local`, not `http://`.
- **AMD GPU driver issues** — Ollama v0.18+ requires ROCm 7 drivers for AMD GPUs. Update your drivers if you see GPU detection failures.
- **Ollama not reachable from sandbox:** Ollama must be bound to `0.0.0.0`, not `127.0.0.1`. This applies to host-level Ollama only; the community sandbox handles this automatically.
- **`OPENAI_BASE_URL` wrong:** Use `http://host.openshell.internal:11434/v1`, not `localhost` or `127.0.0.1`.
- **Model not found:** Run `ollama ps` to confirm the model is loaded. Run `ollama pull <model>` if needed.
- **HTTPS instead of HTTP:** Code inside sandboxes must call `https://inference.local`, not `http://`.
- **AMD GPU driver issues:** Ollama v0.18+ requires ROCm 7 drivers for AMD GPUs. Update your drivers if you see GPU detection failures.
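The base-URL pitfall in the list above is easy to check mechanically. A minimal sketch, assuming the `host.openshell.internal` alias shown above:

```shell
# Sanity-check the inference base URL before launching an agent.
# Loopback addresses resolve to the sandbox itself, not the gateway host.
OPENAI_BASE_URL="http://host.openshell.internal:11434/v1"
case "$OPENAI_BASE_URL" in
  *localhost*|*127.0.0.1*) echo "BAD: loopback does not reach the host from inside the sandbox" ;;
  *host.openshell.internal*) echo "OK" ;;
  *) echo "WARN: unexpected host in base URL" ;;
esac
```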

Useful commands:

8 changes: 4 additions & 4 deletions docs/get-started/tutorials/local-inference-lmstudio.mdx
@@ -15,7 +15,7 @@ The LM Studio server provides easy setup with both OpenAI and Anthropic compatib

</Note>

This tutorial will cover:
This tutorial covers:

- Expose a local inference server to OpenShell sandboxes.
- Verify end-to-end inference from inside a sandbox.
@@ -54,11 +54,11 @@ lms daemon up

Start the LM Studio local server from the Developer tab, and verify the OpenAI-compatible endpoint is enabled.

LM Studio will listen to `127.0.0.1:1234` by default. For use with OpenShell, you'll need to configure LM Studio to listen on all interfaces (`0.0.0.0`).
LM Studio listens on `127.0.0.1:1234` by default. For use with OpenShell, configure LM Studio to listen on all interfaces (`0.0.0.0`).

If you're using the GUI, go to the Developer Tab, select Server Settings, then enable Serve on Local Network.
If you use the GUI, go to the Developer Tab, select Server Settings, then enable Serve on Local Network.

If you're using llmster in headless mode, run `lms server start --bind 0.0.0.0`.
If you use llmster in headless mode, run `lms server start --bind 0.0.0.0`.

## Test with a small model

4 changes: 2 additions & 2 deletions docs/index.mdx
@@ -38,7 +38,7 @@ uncontrolled network activity.

Install OpenShell and create your first sandbox in two commands.

{/*Terminal demo styles live in fern/main.css — inline <style> with { } breaks MDX/acorn*/}
{/*Terminal demo styles live in fern/main.css. Inline <style> with { } breaks MDX/acorn*/}
<llms-ignore>

<div className="nc-term">
@@ -139,7 +139,7 @@ Keep inference traffic private by routing API calls to local or self-hosted back

<Card title="Observability" href="/observability">

Understand sandbox logs, access them via CLI and TUI, and export OCSF JSON records.
Understand sandbox logs, access them with the CLI and TUI, and export OCSF JSON records.

<Badge intent="tip" minimal outlined>How-To</Badge>
</Card>
2 changes: 1 addition & 1 deletion docs/index.yml
@@ -16,7 +16,7 @@ navigation:
- folder: get-started/tutorials
skip-slug: true
- folder: sandboxes
title: "How It Works"
title: "Manage OpenShell"
- folder: observability
title: "Observability"
- folder: kubernetes
6 changes: 3 additions & 3 deletions docs/kubernetes/access-control.mdx
@@ -64,7 +64,7 @@ helm upgrade openshell \
--set server.oidc.userRole=openshell-user
```

`adminRole` and `userRole` must both be set or both be empty — setting only one is not supported.
Both `adminRole` and `userRole` must be set, or both must be empty. Setting only one is not supported.
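Expressed as a values file rather than `--set` flags, the constraint looks like this (key paths mirror the flags above):

```yaml
server:
  oidc:
    adminRole: openshell-admin   # set both roles...
    userRole: openshell-user     # ...or leave both unset; mixing is unsupported
```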

### Provider-specific rolesClaim paths

@@ -76,7 +76,7 @@ helm upgrade openshell \

## Reverse-Proxy Auth Termination

When an access proxy such as Cloudflare Access, ngrok, or a corporate SSO gateway handles authentication in front of the OpenShell gateway, you can disable the gateway's own client certificate verification:
When an access proxy, such as Cloudflare Access, ngrok, or a corporate SSO gateway, handles authentication in front of the OpenShell gateway, you can disable the gateway's own client certificate verification:

```shell
helm upgrade openshell \
@@ -99,7 +99,7 @@ To also disable TLS entirely (when the proxy terminates TLS before the request r
Only disable TLS and gateway auth when the gateway is not reachable from outside the cluster and the proxy path is fully trusted. Never expose a plaintext, auth-disabled gateway to a public network.
</Warning>

Register the gateway with the CLI using the proxy's public URL — the browser-based login flow runs automatically on first use:
Register the gateway with the CLI using the proxy's public URL. The browser-based login flow runs automatically on first use:

```shell
openshell gateway add https://gateway.example.com --name production
2 changes: 1 addition & 1 deletion docs/kubernetes/ingress.mdx
@@ -55,7 +55,7 @@ After the Gateway is provisioned, Envoy Gateway creates a LoadBalancer service i
kubectl -n openshell get svc -l gateway.envoyproxy.io/owning-gateway-name=openshell
```

Once the `EXTERNAL-IP` is assigned, register the gateway with the CLI:
After the `EXTERNAL-IP` is assigned, register the gateway with the CLI:

```shell
openshell gateway add http://<external-ip> --name production
4 changes: 2 additions & 2 deletions docs/kubernetes/managing-certificates.mdx
@@ -12,13 +12,13 @@ The OpenShell gateway requires mTLS certificates for sandbox supervisors and cli

| Mode | When to use |
|---|---|
| Built-in `pkiInitJob` (default) | Simplest path. A pre-install Kubernetes Job generates a self-signed CA and certificates once at install time. No additional dependencies. |
| Built-in `pkiInitJob` (default) | The default path. A pre-install Kubernetes Job generates a self-signed CA and certificates during installation. No additional dependencies. |
| cert-manager | Production deployments that need automatic certificate rotation managed by a running controller. |

The rest of this page covers switching to cert-manager. The built-in mode requires no configuration.

<Note>
cert-manager and `pkiInitJob` are mutually exclusive. The chart will fail if both are enabled at the same time.
cert-manager and `pkiInitJob` are mutually exclusive. The chart fails if both are enabled at the same time.
</Note>

## Install cert-manager
6 changes: 3 additions & 3 deletions docs/kubernetes/openshift.mdx
@@ -83,6 +83,6 @@ openshell status

## Next Steps

- For TLS-enabled deployments, see [Managing Certificates](/kubernetes/managing-certificates) once SCC-compatible PKI is supported.
- To expose the gateway externally, see [Ingress](/kubernetes/ingress).
- To configure OIDC authentication, see [Access Control](/kubernetes/access-control).
- For TLS-enabled deployments, refer to [Managing Certificates](/kubernetes/managing-certificates) after SCC-compatible PKI is supported.
- To expose the gateway externally, refer to [Ingress](/kubernetes/ingress).
- To configure OIDC authentication, refer to [Access Control](/kubernetes/access-control).
24 changes: 12 additions & 12 deletions docs/kubernetes/setup.mdx
@@ -1,15 +1,15 @@
---
# SPDX-FileCopyrightText: Copyright (c) 2025-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
title: "Get Started on Kubernetes"
title: "Set Up OpenShell on Kubernetes"
sidebar-title: "Setup"
description: "Deploy the OpenShell gateway to a Kubernetes cluster using the official Helm chart from GHCR."
keywords: "Generative AI, Cybersecurity, Kubernetes, Helm, Gateway, Deployment, OCI, GHCR, Installation"
position: 1
---

<Warning>
The OpenShell Helm chart is experimental and under active development. Templates, values, and defaults may change between releases. Do not use it in production.
The OpenShell Helm chart is experimental and under active development. Templates, values, and defaults can change between releases. Do not use it in production.
</Warning>

Use the Kubernetes deployment when the gateway should run on a shared cluster, in a cloud environment, or as part of team infrastructure. The Helm chart deploys the gateway as a StatefulSet and handles PKI bootstrap, RBAC, and sandbox namespace setup automatically.
@@ -20,11 +20,11 @@ Make sure the following are in place before you install.

| Prerequisite | Required | Notes |
|---|---|---|
| Kubernetes 1.29+ with RBAC enabled | Yes | |
| Helm 3.x | Yes | |
| Agent Sandbox controller and CRDs | Yes | Install before the OpenShell chart — see [Install Agent Sandbox](#install-agent-sandbox) below |
| cert-manager | No | See [Managing Certificates](/kubernetes/managing-certificates) — only needed if you prefer cert-manager over the built-in PKI job |
| Kubernetes Gateway API | No | See [Ingress](/kubernetes/ingress) — only needed for external access without port-forwarding |
| Kubernetes 1.29+ with RBAC enabled | Yes | No additional notes. |
| Helm 3.x | Yes | No additional notes. |
| Agent Sandbox controller and CRDs | Yes | Install before the OpenShell chart. Refer to [Install Agent Sandbox](#install-agent-sandbox). |
| cert-manager | No | Refer to [Managing Certificates](/kubernetes/managing-certificates). Use cert-manager only if you prefer it over the built-in PKI job. |
| Kubernetes Gateway API | No | Refer to [Ingress](/kubernetes/ingress). Use it only for external access without port-forwarding. |

## Install Agent Sandbox

@@ -94,7 +94,7 @@ kubectl -n openshell port-forward svc/openshell 8080:8080
```

<Warning>
The port-forward is for local evaluation only. For shared environments, expose the gateway through your ingress controller or access proxy. See [Ingress](/kubernetes/ingress) for an external access option.
The port-forward is for local evaluation only. For shared environments, expose the gateway through your ingress controller or access proxy. Refer to [Ingress](/kubernetes/ingress) for an external access option.
</Warning>

## Install the client mTLS certificate
@@ -150,7 +150,7 @@ helm upgrade --install openshell \

## Next Steps

- To enable automatic certificate rotation with cert-manager, see [Managing Certificates](/kubernetes/managing-certificates).
- To expose the gateway externally without port-forwarding, see [Ingress](/kubernetes/ingress).
- To configure OIDC or reverse-proxy authentication, see [Access Control](/kubernetes/access-control).
- To create your first sandbox, see [Manage Sandboxes](/sandboxes/manage-sandboxes).
- To enable automatic certificate rotation with cert-manager, refer to [Managing Certificates](/kubernetes/managing-certificates).
- To expose the gateway externally without port-forwarding, refer to [Ingress](/kubernetes/ingress).
- To configure OIDC or reverse-proxy authentication, refer to [Access Control](/kubernetes/access-control).
- To create your first sandbox, refer to [Manage Sandboxes](/sandboxes/manage-sandboxes).
2 changes: 1 addition & 1 deletion docs/observability/accessing-logs.mdx
@@ -54,7 +54,7 @@ You can also run a one-off command without an interactive shell:
openshell sandbox connect my-sandbox -- cat /var/log/openshell.2026-04-01.log
```

The log files inside the sandbox contain the complete record, including events that may have been dropped from the gRPC push channel under load. The push channel is bounded and drops events rather than blocking.
The log files inside the sandbox contain the complete record, including events that the gRPC push channel can drop under load. The push channel is bounded and drops events rather than blocking.
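You can quantify the gap between the two records with ordinary shell tools. The following demo uses stand-in files; in practice the on-disk path follows the example above and the push-side capture is whatever you saved from the CLI stream:

```shell
# Stand-in data: the on-disk file is the complete record; the push-stream
# capture may be missing events that were dropped under load.
printf 'evt1\nevt2\nevt3\n' > /tmp/disk.log   # stands in for /var/log/openshell.*.log
printf 'evt1\nevt3\n'       > /tmp/push.log   # stands in for a captured gRPC stream
disk=$(wc -l < /tmp/disk.log)
push=$(wc -l < /tmp/push.log)
echo "dropped: $((disk - push))"
```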

## Filtering by Event Type

4 changes: 2 additions & 2 deletions docs/observability/logging.mdx
@@ -20,7 +20,7 @@ Internal operational events use Rust's `tracing` framework with a conventional f
2026-04-01T03:28:39.175Z INFO openshell_sandbox: Creating OPA engine from proto policy data
```

These events cover startup plumbing, gRPC communication, and internal state transitions that are useful for debugging but don't represent security-relevant decisions.
These events cover startup plumbing, gRPC communication, and internal state transitions that are useful for debugging but do not represent security-relevant decisions.

### OCSF structured events

@@ -40,7 +40,7 @@ In the log file, OCSF events appear in a shorthand format with an `OCSF` level l

The `OCSF` label at column 25 distinguishes structured events from standard `INFO` tracing at the same position. Both formats appear in the same file.
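Because both formats share the level column, separating them in a captured file is a one-liner. A small sketch with sample lines modeled loosely on the formats described on this page (the second line approximates the OCSF shorthand, not its exact layout):

```shell
# Keep only conventional tracing lines, dropping OCSF structured events.
cat > /tmp/sample.log <<'EOF'
2026-04-01T03:28:39.175Z INFO openshell_sandbox: Creating OPA engine from proto policy data
2026-04-01T03:28:40.002Z OCSF openshell_sandbox: NET:OPEN ALLOWED /usr/bin/curl(58) -> api.github.com:443
EOF
grep -E ' (INFO|WARN|ERROR) ' /tmp/sample.log
```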

When viewed through the CLI or TUI, which receive logs via gRPC, the same distinction applies:
When viewed through the CLI or TUI, which receive logs through gRPC, the same distinction applies:

```text
[1775014132.118] [sandbox] [OCSF ] [ocsf] NET:OPEN [INFO] ALLOWED /usr/bin/curl(58) -> api.github.com:443 [policy:github_api engine:opa]