13 changes: 13 additions & 0 deletions .github/labeler.yml
@@ -108,4 +108,17 @@ Sentinel:
  - changed-files:
    - any-glob-to-any-file: [
        'content/sentinel/**'
      ]

# Add 'HCP Docs' label to changes under 'content/hcp-docs'
#
# Label | Rule
# --------------- | ------------------------------------------------------------
# HCP Docs        | Applies to all doc updates under 'content/hcp-docs'

HCP Docs:
- any:
  - changed-files:
    - any-glob-to-any-file: [
        'content/hcp-docs/**'
      ]
212 changes: 212 additions & 0 deletions content/hcp-docs/content/docs/boundary/audit-logging.mdx
@@ -0,0 +1,212 @@
---
page_title: Audit log streaming
description: |-
Set up audit log streaming for HCP Boundary with AWS CloudWatch or Datadog.
---

# Audit log streaming

HCP Boundary supports near real-time streaming of audit events to existing customer-managed accounts with supported providers. Audit events capture all create, list, update, and delete operations performed by an authenticated Boundary client (Desktop, CLI, or the browser-based admin UI) on any of the following Boundary resources:

- Sessions
- Scopes
- Workers
- Credential stores, credential libraries, credentials
- Auth methods, roles, managed groups, groups, users, accounts, grants
- Host catalogs, host sets, hosts, targets

The captured data includes the user ID of the user performing the operation, the timestamp, and the full request and response payloads.
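
The exact contents of an event vary by resource and operation. As a rough, hypothetical illustration only (the field names below are illustrative and not a schema reference), an audit event for a target creation could look similar to the following:

```json
{
  "type": "audit",
  "timestamp": "2024-05-01T16:20:00Z",
  "auth": {
    "user_id": "u_1234567890"
  },
  "request": {
    "operation": "create",
    "resource_type": "target",
    "payload": { "name": "postgres-prod", "scope_id": "p_exampleProject" }
  },
  "response": {
    "payload": { "id": "ttcp_exampleTarget", "name": "postgres-prod" }
  }
}
```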

Audit logs allow administrators to track user activity and enable security teams to ensure compliance in accordance with regulatory requirements.

This documentation outlines the steps required to enable and configure audit log streaming to the supported providers, AWS CloudWatch and Datadog. You can stream logs to one provider account at a time.

## Configure streaming with AWS CloudWatch

To configure audit log streaming with AWS CloudWatch, you must create an [IAM role](https://docs.aws.amazon.com/iam/?id=docs_gateway) that HCP Boundary can use to send logs to AWS CloudWatch. The following steps describe how to create the IAM role with the necessary configuration.

### Create IAM policy

1. Launch the [AWS Management Console](https://console.aws.amazon.com/), navigate to **IAM > Policies**, and click **Create policy**.
1. Choose **JSON** and enter the following policy in the policy editor.

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "HCPLogStreaming",
         "Effect": "Allow",
         "Action": [
           "logs:PutLogEvents",
           "logs:DescribeLogStreams",
           "logs:DescribeLogGroups",
           "logs:CreateLogStream",
           "logs:CreateLogGroup",
           "logs:TagLogGroup"
         ],
         "Resource": "*"
       }
     ]
   }
   ```

1. Click **Next**.
1. Enter a name for the new policy, for example, `hcp-log-streaming`.
1. Click **Create policy** to create the IAM policy.

### Configure the IAM role

Before you create a new IAM role, get the HashiCorp-generated external ID from the HCP Portal.

1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/).
1. Navigate to Boundary, and select your cluster.
1. Select **Audit logs**.
![Enable audit log streaming](/img/docs/boundary/enable-logs.png)
1. Click **Enable log streaming**.
1. Select **AWS CloudWatch**.
1. Copy the **External ID** value.
![HCP Portal - audit log streaming page](/img/docs/boundary/ui-audit-log-streaming.png)
You will need this value during the IAM role creation.

Next, create the IAM role using AWS Management Console or HashiCorp Terraform.

<Tabs>
<Tab heading="AWS Management Console">

1. Launch the **AWS Management Console**, navigate to **IAM > Roles**, and click **Create role**.
1. For **Trusted entity type**, select **AWS account**.
1. For **An AWS account**, select **Another AWS account**.
1. Enter **711430482607** in the **Account ID** field.
1. Under **Options**, select **Require external ID**.
1. Enter the **External ID** value you copied from the [HCP portal](https://portal.cloud.hashicorp.com/).
1. Click **Next**.
1. Select the policy you created earlier, and click **Next** to attach the policy to the role.
1. Click **Create role** to complete.
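
For reference, the console steps above result in a role trust policy similar to the following sketch, where the placeholder stands for the external ID value you copied from the HCP Portal:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::711430482607:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<ExternalID-generated-by-Hashicorp>"
        }
      }
    }
  ]
}
```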


</Tab>
<Tab heading="Terraform">

Use the following Terraform configuration to create the IAM role necessary to enable audit log streaming.

```hcl
data "aws_iam_policy_document" "allow_hcp_to_stream_logs" {
  statement {
    effect = "Allow"
    actions = [
      "logs:PutLogEvents",       # To write logs to cloudwatch
      "logs:DescribeLogStreams", # To get the latest sequence token of a log stream
      "logs:DescribeLogGroups",  # To check if a log group already exists
      "logs:CreateLogGroup",     # To create a new log group
      "logs:CreateLogStream"     # To create a new log stream
    ]
    resources = [
      "*"
    ]
  }
}

data "aws_iam_policy_document" "trust_policy" {
  statement {
    sid     = "HCPLogStreaming"
    effect  = "Allow"
    actions = ["sts:AssumeRole"]
    principals {
      identifiers = ["711430482607"]
      type        = "AWS"
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values = [
        "<ExternalID-generated-by-Hashicorp>"
      ]
    }
  }
}

resource "aws_iam_role" "role" {
  name               = "hcp-log-streaming"
  description        = "iam role that allows hcp to send logs to cloudwatch logs"
  assume_role_policy = data.aws_iam_policy_document.trust_policy.json
  inline_policy {
    name   = "inline-policy"
    policy = data.aws_iam_policy_document.allow_hcp_to_stream_logs.json
  }
}
```

</Tab>
</Tabs>

Once you have created the IAM role, you can configure audit log streaming in HCP Boundary.

1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/).
1. From the HCP Boundary **Overview** page, select the **Audit logs** view.
1. Click **Enable log streaming**.
1. Select **AWS CloudWatch**.
![AWS audit log configuration](/img/docs/boundary/aws-enable-logs.png)
1. Under the **CloudWatch configuration** section, enter your **Destination name** and **Role ARN**. The role ARN has the form `arn:aws:iam::<your-account-id>:role/hcp-log-streaming`, assuming you used the example role name.
1. Select the **Region** that matches where you want your data stored.
1. Click **Save**.

Logs should arrive in your AWS CloudWatch environment within a few minutes of Boundary usage.

HashiCorp dynamically creates the log group and log streams for you. After you set up your configuration, you can find the log group in AWS CloudWatch under the prefix `/hashicorp`. The log group lets you filter the HashiCorp-generated logs separately from other logs you may have in CloudWatch.

Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) for details on log exploration.

## Configure streaming with Datadog

To configure audit log streaming with Datadog, you will need the following:

- The region your Datadog account is in
- Your Datadog [API key](https://docs.datadoghq.com/account_management/api-app-keys/)

Complete the following steps:

1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/).
1. Navigate to Boundary, and select your cluster.
1. Select **Audit logs**.
![Enable audit log streaming](/img/docs/boundary/enable-logs.png)
1. Click **Enable log streaming**.
1. Select **Datadog**.
![Datadog audit log configuration](/img/docs/boundary/datadog-enable-logs.png)
1. Under the **Datadog configuration** section, enter your **Destination name** and **API Key**.
1. Select the **Datadog site region** that matches your existing Datadog environment.
1. Click **Save**.

Logs should arrive in your Datadog environment within a few minutes of using Boundary.
Refer to the [Datadog documentation](https://docs.datadoghq.com/getting_started/logs/#explore-your-logs) for details on log exploration.

## Test your streaming configuration

When you set up streaming, you can test that the configuration works from within HCP. Testing the configuration is helpful when you want to verify that you entered the correct credentials and other parameters on the configuration page. To test the configuration, enter the parameters for the logging provider you want to test, and then click **Test connection**.

![Test Connection button](/img/docs/boundary/test-connection.png)

HCP sends a test message to the logging provider and reports success or failure on the **Enable log streaming** page.

You can also test the connection when you update an existing streaming configuration.

## Update your streaming configuration

You can update an existing audit log streaming configuration. For example, you may need to rotate a secret used by your logging provider, or you may want to switch from one logging provider to another.

1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/).
1. Navigate to Boundary, and select your cluster.
1. Select **Audit logs**.
1. Select **Edit streaming configuration** under the **Manage** menu.
   ![Update Connection menu](/img/docs/boundary/update-connection.png)

   You can:
   - Select a new provider
   - Enter new parameters for the provider
   - Test the connection by selecting **Test connection**

1. Click **Save**.

## Retention

HCP Boundary stores the audit logs for a minimum of one year within the platform. HCP began archiving audit logs in October 2022. The logs remain available even after the cluster that created them is deleted. Submit a request to the [HashiCorp Help Center](https://support.hashicorp.com/hc/en-us/requests/new) if you need access to logs from deleted clusters or have further questions.
24 changes: 24 additions & 0 deletions content/hcp-docs/content/docs/boundary/configure-ttl.mdx
@@ -0,0 +1,24 @@
---
page_title: Configure auth token time to live (TTL)
description: >-
Learn how to configure the time to live (TTL) for the auth token that Boundary controllers issue.
---

# Configure authentication time to live

You can configure the time-to-live (TTL) and time-to-stale (TTS) settings that control how often Boundary requires a user to authenticate.
The TTL setting controls the lifespan of an auth token, while the TTS setting controls how long Boundary permits an auth token to remain inactive. For example, with a TTL of 168 hours and a TTS of 24 hours, an auth token expires after 7 days at the latest, and sooner if it goes unused for a full day.

Complete the following steps to configure the time-to-live and time-to-stale settings for any auth tokens your HCP controllers issue.

1. Log in to [the HCP Portal](https://portal.cloud.hashicorp.com/), and navigate to the **Overview** page for the Boundary cluster you want to configure.
1. In the **Controller configuration** section, click **Edit**.
1. Complete the following fields on the **Auth Token TTL** tab:
   - **Time to Live**: Enter the number of hours that auth tokens remain valid before Boundary requires the user to authenticate again.
     Click **Set to default** to set the time-to-live setting to the default value.
   - **Time to Stale**: Enter the number of hours that auth tokens can remain inactive before Boundary requires the user to authenticate again.
     Click **Set to default** to set the time-to-stale setting to the default value.
1. Click **Save**.

The updated settings apply to any new sessions you create.
You can view the updated settings on the cluster **Overview** page in the **Controller configuration** section.
50 changes: 50 additions & 0 deletions content/hcp-docs/content/docs/boundary/how-boundary-works.mdx
@@ -0,0 +1,50 @@
---
page_title: How HCP Boundary Works
description: |-
Describe the access model and deployment options of HCP Boundary.
---

# How HCP Boundary works

HCP Boundary is an intelligent proxy that automates user and host onboarding, and provisions access permissions. Boundary creates a workflow for
accessing infrastructure remotely with a number of key steps:

- **User authentication:** Integrates with trusted identity platforms (such as Azure Active Directory, Okta, Ping,
[and many others that support OpenID Connect](/boundary/tutorials/access-management/oidc-auth)).
- **Granular user authorization:** Allows operators to tightly control access to remote systems, and the actions against those systems.
- **Automated connections to hosts:** As you deploy or update workloads, HCP Boundary updates connections to targets and hosts using automated service discovery. Dynamic host catalogs are available with [AWS](/boundary/docs/concepts/host-discovery/aws), [Azure](/boundary/docs/concepts/host-discovery/azure), and [GCP](/boundary/docs/concepts/host-discovery/gcp). This is critical in ephemeral, cloud-based environments so that operators don't need to reconfigure access lists.
- **Integrated credential management:** HCP Boundary brokers access to target credentials natively or via integration with
[HashiCorp Vault](/boundary/tutorials/access-management/oss-vault-cred-brokering-quickstart).
- **Time-limited network access to targets:** Boundary provides time-limited proxies to private endpoints, avoiding the need to expose your network to users.
- **Session monitoring and management:** Provides visibility into the sessions Boundary creates.


## Access model

HCP Boundary provides a solution to protect and safeguard access to applications and critical systems by leveraging trusted identities, without exposing the underlying network. HCP Boundary is an identity-aware proxy that sits between users and the infrastructure they wish to connect to.

The proxy has two components:

- **Controllers:** manage state for users, hosts, and access policies, and the external providers HCP Boundary can query for service discovery.
- **Workers:** are stateless proxies with end-network access to the hosts under management. The control plane assigns a worker node to a target system once an authenticated user selects that target to connect to.

The session starts for the user as a TCP tunnel wrapped in mutual TLS. This mitigates the risk of a man-in-the-middle attack. If a user is connecting to a
host over SSH through an HCP Boundary tunnel, there are two layers of encryption: the SSH session that the user creates and the underlying TLS that HCP Boundary creates.

![Diagram showing how user requests flow through the worker node before HCP Boundary connects the user to the target system.](/img/docs/boundary/access-model.png)


## Deployment options

HCP Boundary is fully managed by HashiCorp, but organizations can choose to self-manage Boundary workers (Boundary's gateway nodes). Self-managed workers enable
organizations to proxy all session data through their own networks, while still providing the convenience of a managed service. In the standard fully-managed
deployment model, HashiCorp manages the control plane and worker nodes, making it easy to get started with Boundary while facilitating scaling over time.

### Self-managed workers

Self-managed workers allow Boundary users to securely connect to private endpoints without exposing an organization's networks to the public, or to HashiCorp-managed
resources. The organization's worker nodes proxy all session activities. To learn more about self-managed workers see the
self-managed workers [tutorial](/boundary/tutorials/hcp-administration/hcp-manage-workers) and
[operations document](/hcp/docs/boundary/self-managed-workers).

![Diagram showing how user requests flow through the self-managed worker before connecting to the target system.](/img/docs/boundary/self-managed-workers.png)
41 changes: 41 additions & 0 deletions content/hcp-docs/content/docs/boundary/index.mdx
@@ -0,0 +1,41 @@
---
page_title: Overview
sidebar_title: Overview
description: |-
This topic provides an overview of HCP Boundary, HashiCorp's secure access management solution.
---

# What is HCP Boundary?

HCP Boundary is a fully managed, cloud-based workflow that enables secure connections to remote hosts and critical systems across cloud and on-premises environments.
As a managed service, HCP Boundary enables zero-trust networking without needing to manage the underlying infrastructure. To get started with HCP Boundary today,
visit [our onboarding guide](/boundary/tutorials/hcp-getting-started).

![Boundary Overview](/img/docs/boundary/boundary-overview.png)

## Why HCP Boundary?

HCP Boundary reduces the complexity of managing access to infrastructure, and enables the user to simply log in, select the desired host or system, and connect.
HCP Boundary handles the routing, connections, and credential brokering on the backend. Users securely connect to their remote systems or hosts without exposing
a credential, address, or network.

The need for secure remote access to dynamic environments is growing rapidly, and today's solutions (such as VPNs, SSH bastions, and PAM) fail to scale effectively
in ephemeral, multi-cloud environments. Current solutions are complex and require multiple network addresses, credentials, permissions, and expertise for users to
access remote hosts and systems. These solutions commonly grant users access to entire networks and credentials, vastly increasing the attack surface area. Users
oftentimes require multiple credentials for the network, hosts, and possibly services. With ephemeral environments, maintaining addresses is even more brittle, and
onboarding or offboarding users is often manual, resulting in increased overhead, more help desk tickets, and longer time to value.

**For administrators:** Boundary provides a simple way for verified users to have secure access to cloud and self-managed infrastructures without exposing your network
or requiring you to manage credentials. Boundary fully automates workflows for both user and target onboarding, which drastically minimizes the configuration overhead
for operators and, unlike traditional access solutions, enables them to keep users and targets up-to-date in cloud environments. HCP Boundary makes it even easier
to take advantage of Boundary by removing the operational overhead of managing it in your environment.

**For developers:** Boundary offers developers a standardized workflow for connecting to their infrastructure, wherever it resides. Boundary's consistent access
workflow removes the need to manage target credentials or target network addresses. This increases developer productivity by reducing time spent figuring out how
to connect to remote systems. Boundary's automated service discovery provides an easier and faster experience for accessing dynamic infrastructure.

## Tutorial

Refer to the [Getting started with HCP Boundary](/boundary/tutorials/hcp-getting-started) tutorial to get hands-on with HCP Boundary
and set up your managed Boundary environment.

<HCPCallout product="boundary" />
32 changes: 32 additions & 0 deletions content/hcp-docs/content/docs/boundary/maintenance-window.mdx
@@ -0,0 +1,32 @@
---
page_title: Configure maintenance windows
description: |-
This topic describes how to configure a maintenance window for HCP upgrades and updates.
---

# Configure maintenance windows

HCP Boundary automatically updates your environment when a newer version of Boundary is released.
You can alternatively schedule maintenance windows for updates to ensure that your end users' productivity is not disrupted during peak hours.
If you schedule a maintenance window, HCP Boundary waits until the day and time you selected to apply any patch or major version updates.

1. Log in to [the HCP Portal](https://portal.cloud.hashicorp.com/), and navigate to the **Overview** page for the Boundary cluster you want to configure.
1. Click **Manage**, and then select **Edit configuration**.
1. On the **Maintenance window** tab, select one of the three options to control when your cluster is updated:
   - **Automatic**: Updates the cluster automatically when a new version of Boundary is released for HCP.
   - **Manual**: Allows you to update the cluster to new versions manually.
     If you select **Manual**, you will receive an email when a new version of Boundary is available.
     When there is a pending update, a banner displays information about the new release on the **Overview** page.
     ![The banner that displays when there is a pending update.](/img/docs/boundary/new-version-available.png)

     <Warning>

     If you select **Manual** but you do not update to a new version within 30 days, the update happens automatically.

     </Warning>

   - **Scheduled**: Allows you to select a day and time window for the update to occur. Note that times are listed in UTC.
1. Click **Save**.

HCP Boundary will now apply updates according to the maintenance window you configured.
HCP Portal users also receive an email notification when a cluster has been updated successfully.