diff --git a/.github/labeler.yml b/.github/labeler.yml index 4bbdeca759..befeb8b9aa 100644 --- a/.github/labeler.yml +++ b/.github/labeler.yml @@ -108,4 +108,17 @@ Sentinel: - changed-files: - any-glob-to-any-file: [ 'content/sentinel/**' + ] + +# Add 'HCP Docs' label to changes under 'content/hcp-docs' +# +# Label | Rule +# --------------- | ------------------------------------------------------------ +# HCP Docs | Default; applies to all doc updates + +HCP Docs: +- any: + - changed-files: + - any-glob-to-any-file: [ + 'content/hcp-docs/**' ] \ No newline at end of file diff --git a/content/hcp-docs/content/docs/boundary/audit-logging.mdx b/content/hcp-docs/content/docs/boundary/audit-logging.mdx new file mode 100644 index 0000000000..bd09c37c63 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/audit-logging.mdx @@ -0,0 +1,212 @@ +--- +page_title: Audit log streaming +description: |- + Set up audit log streaming for HCP Boundary with AWS CloudWatch or Datadog. +--- + +# Audit log streaming + +HCP Boundary supports near real-time streaming of audit events to existing customer-managed accounts of supported providers. Audit events capture all create, list, update, and delete operations performed by an authenticated Boundary client (Desktop, CLI, or the browser-based admin UI) on any of the following Boundary resources: + +- Sessions +- Scopes +- Workers +- Credential stores, credential libraries, credentials +- Auth methods, roles, managed groups, groups, users, accounts, grants +- Host catalogs, host sets, hosts, targets + +The captured data includes the user ID of the user performing the operation, the timestamp, and the full request and response payloads. + +Audit logs allow administrators to track user activity and enable security teams to ensure compliance in accordance with regulatory requirements. + +This documentation outlines the steps required to enable and configure audit log streaming to the supported providers, AWS CloudWatch and Datadog.
You can stream logs to one account at a time. + +## Configure streaming with AWS CloudWatch + +To configure audit log streaming with AWS CloudWatch, you must create an [IAM role](https://docs.aws.amazon.com/iam/?id=docs_gateway) that HCP Boundary can use to send logs to AWS CloudWatch. The following steps describe how to create the IAM role with the necessary configuration. + +### Create IAM policy + +1. Launch the [AWS Management Console](https://console.aws.amazon.com/), navigate to **IAM > Policies**, and click **Create policy**. +1. Choose **JSON** and enter the following policy in the policy editor. + + ```json + { + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "HCPLogStreaming", + "Effect": "Allow", + "Action": [ + "logs:PutLogEvents", + "logs:DescribeLogStreams", + "logs:DescribeLogGroups", + "logs:CreateLogStream", + "logs:CreateLogGroup", + "logs:TagLogGroup" + ], + "Resource": "*" + } + ] + } + ``` + +1. Click **Next**. +1. Enter a name for the new policy, for example, `hcp-log-streaming`. +1. Click **Create policy** to create the IAM policy. + +### Configure the IAM role + +Before you create a new IAM role, get the HashiCorp-generated external ID from the HCP Portal. + +1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/). +1. Navigate to Boundary, and select your cluster. +1. Select **Audit logs**. + ![Enable audit log streaming](/img/docs/boundary/enable-logs.png) +1. Click **Enable log streaming**. +1. Select **AWS CloudWatch**. +1. Copy the **External ID** value. + ![HCP Portal - audit log streaming page](/img/docs/boundary/ui-audit-log-streaming.png) + You will need this value during the IAM role creation. + +Next, create the IAM role using the AWS Management Console or HashiCorp Terraform. + + + + +1. Launch the **AWS Management Console**, navigate to **IAM > Roles**, and click **Create role**. +1. For **Trusted entity type**, select **AWS account**. +1. For **An AWS account**, select **Another AWS account**. +1.
Enter **711430482607** in the **Account ID** field. +1. Under **Options**, select **Require external ID**. +1. Enter the **External ID** value you copied from the [HCP portal](https://portal.cloud.hashicorp.com/). +1. Click **Next**. +1. Select the policy you created earlier, and click **Next** to attach the policy to the role. +1. Click **Create role** to complete. + + + + + +Use the following Terraform configuration to create the IAM role necessary to enable audit log streaming. + +```hcl +data "aws_iam_policy_document" "allow_hcp_to_stream_logs" { + statement { + effect = "Allow" + actions = [ + "logs:PutLogEvents", # To write logs to cloudwatch + "logs:DescribeLogStreams", # To get the latest sequence token of a log stream + "logs:DescribeLogGroups", # To check if a log group already exists + "logs:CreateLogGroup", # To create a new log group + "logs:CreateLogStream" # To create a new log stream + ] + resources = [ + "*" + ] + } +} + +data "aws_iam_policy_document" "trust_policy" { + statement { + sid = "HCPLogStreaming" + effect = "Allow" + actions = ["sts:AssumeRole"] + principals { + identifiers = ["711430482607"] + type = "AWS" + } + condition { + test = "StringEquals" + variable = "sts:ExternalId" + values = [ + "" + ] + } + } +} + +resource "aws_iam_role" "role" { + name = "hcp-log-streaming" + description = "iam role that allows hcp to send logs to cloudwatch logs" + assume_role_policy = data.aws_iam_policy_document.trust_policy.json + inline_policy { + name = "inline-policy" + policy = data.aws_iam_policy_document.allow_hcp_to_stream_logs.json + } +} +``` + + + + +Once you have created the IAM role, you can configure the audit log streaming in HCP Boundary. + +1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/). +1. From the HCP Boundary **Overview** page, select the **Audit logs** view. +1. Click **Enable log streaming**. +1. Select **AWS CloudWatch**. + ![AWS audit log configuration](/img/docs/boundary/aws-enable-logs.png) +1. 
Under the **CloudWatch configuration** section, enter your **Destination name** and **Role ARN**. +1. Select the **Region** that matches where you want your data stored. +1. Click **Save**. + +Logs should arrive in your AWS CloudWatch environment within a few minutes of Boundary usage. + +HashiCorp dynamically creates the log group and log streams for you. You can find the log group in your AWS CloudWatch with the prefix `/hashicorp` after setting up your configuration. The log group lets you filter the HashiCorp-generated logs separately from other logs you may have in CloudWatch. + +Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) for details on log exploration. + +## Configure streaming with Datadog + +To configure audit log streaming with Datadog, you need the following: + +- The region your Datadog account is in +- Your Datadog [API key](https://docs.datadoghq.com/account_management/api-app-keys/) + +Complete the following steps: + +1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/). +1. Navigate to Boundary, and select your cluster. +1. Select **Audit logs**. + ![Enable audit log streaming](/img/docs/boundary/enable-logs.png) +1. Click **Enable log streaming**. +1. Select **Datadog**. + ![Datadog audit log configuration](/img/docs/boundary/datadog-enable-logs.png) +1. Under the **Datadog configuration** section, enter your **Destination name** and **API Key**. +1. Select the **Datadog site region** that matches your existing Datadog environment. +1. Click **Save**. + +Logs should arrive in your Datadog environment within a few minutes of using Boundary. +Refer to the [Datadog documentation](https://docs.datadoghq.com/getting_started/logs/#explore-your-logs) for details on log exploration. + +## Test your streaming configuration + +During the streaming configuration setup, you can test that the streaming configuration is working within HCP.
Testing the configuration can be helpful when you want to verify you entered the correct credentials and other parameters on the configuration page. To test the configuration, enter the parameters for the logging provider you want to test, then click **Test connection**. + + ![Test Connection button](/img/docs/boundary/test-connection.png) + +HCP sends a test message to the logging provider and shares the status of success or failure on the **Enable log streaming** page. + +You can also test the configuration when you update an existing streaming configuration. + +## Update your streaming configuration + +You can update your existing audit log streaming configuration. For example, you may need to rotate a secret used for your logging provider, or you may need to switch from one logging provider to another. + +1. Launch the [HCP Portal](https://portal.cloud.hashicorp.com/). +1. Navigate to Boundary, and select your cluster. +1. Select **Audit logs**. +1. Select **Edit streaming configuration** under the **Manage** menu. + ![Update Connection menu](/img/docs/boundary/update-connection.png) + + You can: + - Select a new provider + - Enter new parameters for the provider + - Test the connection by selecting **Test connection** + +1. Click **Save**. + +## Retention + +HCP Boundary stores the audit logs for a minimum of one year within the platform. HCP began archiving audit logs in October of 2022. The logs remain available after the deletion of the cluster that created them. Submit a request to the [HashiCorp Help Center](https://support.hashicorp.com/hc/en-us/requests/new) if you need access to logs from deleted clusters or have further questions.
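Once streaming is active, one quick way to confirm delivery from your side is the AWS CLI. The following sketch is illustrative only; it assumes AWS CLI v2 is configured with credentials for the target account, and the log group name and region are placeholders you would replace with your own values.

```shell
# List the log groups HashiCorp creates (the documented prefix is /hashicorp)
aws logs describe-log-groups \
  --log-group-name-prefix /hashicorp \
  --region us-east-1   # use the region you selected in the HCP Portal

# Tail recent audit events from a discovered group
# (the group name below is an illustrative placeholder)
aws logs tail "/hashicorp/<your-log-group>" --since 1h
```

If no events appear, re-run the **Test connection** check in the HCP Portal before debugging on the AWS side.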
diff --git a/content/hcp-docs/content/docs/boundary/configure-ttl.mdx b/content/hcp-docs/content/docs/boundary/configure-ttl.mdx new file mode 100644 index 0000000000..dc02b6ac44 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/configure-ttl.mdx @@ -0,0 +1,24 @@ +--- +page_title: Configure auth token time to live (TTL) +description: >- + Learn how to configure the time to live (TTL) for the auth token that Boundary controllers issue. +--- + +# Configure authentication time to live + +You can configure the time-to-live (TTL) and time-to-stale (TTS) settings that control how often Boundary requires a user to authenticate. +The TTL setting controls the lifespan of an auth token, while the TTS setting controls how long Boundary permits an auth token to be inactive. + +Complete the following steps to configure the time-to-live and time-to-stale settings for any auth tokens your HCP controllers issue. + +1. Log in to [the HCP Portal](https://portal.cloud.hashicorp.com/), and navigate to the **Overview** page for the Boundary cluster you want to configure. +1. In the **Controller configuration** section, click **Edit**. +1. Complete the following fields on the **Auth Token TTL** tab: + - **Time to Live**: Enter the number of hours you want to let auth tokens be valid before requiring a user to authenticate again. + Click **Set to default** to set the time-to-live setting to the default value. + - **Time to Stale**: Enter the number of hours you want to let auth tokens be inactive before requiring a user to authenticate again. + Click **Set to default** to set the time-to-stale setting to the default value. +1. Click **Save**. + +The updated settings apply to any new sessions you create. +You can view the updated settings on the cluster **Overview** page in the **Controller configuration** section. 
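To verify that the settings took effect, you can inspect a token with the Boundary CLI. This is a hedged sketch: the token ID below is an illustrative placeholder, and exact output field names may vary by Boundary version.

```shell
# Read an auth token issued by the cluster (ID is an illustrative placeholder)
boundary auth-tokens read -id at_1234567890

# The expiration and approximate-last-used fields in the output reflect
# the time-to-live and time-to-stale values configured for the cluster
```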
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/boundary/how-boundary-works.mdx b/content/hcp-docs/content/docs/boundary/how-boundary-works.mdx new file mode 100644 index 0000000000..88fa875739 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/how-boundary-works.mdx @@ -0,0 +1,50 @@ +--- +page_title: How HCP Boundary Works +description: |- + Describe the access model and deployment options of HCP Boundary. +--- + +# How HCP Boundary works + +HCP Boundary is an intelligent proxy that automates user and host onboarding, and provisions access permissions. Boundary creates a workflow for +accessing infrastructure remotely with a number of key steps: + +- **User authentication:** Integrates with trusted identity platforms (such as Azure Active Directory, Okta, Ping, +[and many others that support OpenID Connect](/boundary/tutorials/access-management/oidc-auth)). +- **Granular user authorization:** Allows operators to tightly control access to remote systems, and the actions against those systems. +- **Automated connections to hosts:** As you deploy or update workloads, HCP Boundary updates connections to targets and hosts using automated service discovery. Dynamic host catalogs are available with [AWS](/boundary/docs/concepts/host-discovery/aws), [Azure](/boundary/docs/concepts/host-discovery/azure), and [GCP](/boundary/docs/concepts/host-discovery/gcp). This is critical in ephemeral, cloud-based environments so that operators don't need to reconfigure access lists. +- **Integrated credential management:** HCP Boundary brokers access to target credentials natively or via integration with +[HashiCorp Vault](/boundary/tutorials/access-management/oss-vault-cred-brokering-quickstart). +- **Time-limited network access to targets:** Boundary provides time-limited proxies to private endpoints, avoiding the need to expose your network to users. +- **Session monitoring and management:** Provides visibility into the sessions Boundary creates. 
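The workflow above can be sketched end to end with the Boundary CLI; the auth method and target IDs below are illustrative placeholders, not values from your cluster.

```shell
# Authenticate through a trusted identity provider
# (OIDC auth method ID is a placeholder)
boundary authenticate oidc -auth-method-id amoidc_1234567890

# Discover the targets this identity is authorized to see
boundary targets list -recursive

# Open a time-limited, TLS-wrapped proxy session to a target
# (target ID is a placeholder)
boundary connect ssh -target-id ttcp_1234567890
```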
+ + +## Access model + +HCP Boundary provides a solution to protect and safeguard access to applications and critical systems by leveraging trusted identities, without exposing the underlying network. HCP Boundary is an identity-aware proxy that sits between users and the infrastructure they wish to connect to. + +The proxy has two components: + +- **Controllers:** manage state for users, hosts, and access policies, and the external providers HCP Boundary can query for service discovery. +- **Workers:** act as stateless proxies with end-network access to hosts under management. The control plane assigns each worker node to a target system once an authenticated user selects a target to connect to. + +The session starts for the user as a TCP tunnel wrapped in mutual TLS. This mitigates the risk of a man-in-the-middle attack. If a user is connecting to a +host over SSH through an HCP Boundary tunnel, there are two layers of encryption: the SSH session that the user creates, and the underlying TLS that HCP Boundary creates. + +![Diagram of user requests flow through the worker node before HCP Boundary connects the user to the target system.](/img/docs/boundary/access-model.png) + + +## Deployment options + +HCP Boundary is fully managed by HashiCorp, but organizations can choose to self-manage Boundary workers (Boundary's gateway nodes). Self-managed workers enable +organizations to proxy all session data through their own networks, while still providing the convenience of a managed service. In the standard fully-managed +deployment model, HashiCorp manages the control plane and worker nodes, making it easy to get started with Boundary while facilitating scaling over time. + +### Self-managed workers + +Self-managed workers allow Boundary users to securely connect to private endpoints without exposing an organization's networks to the public, or to HashiCorp-managed +resources. The organization's worker nodes proxy all session activities.
To learn more about self-managed workers, see the +self-managed workers [tutorial](/boundary/tutorials/hcp-administration/hcp-manage-workers) and +[operations document](/hcp/docs/boundary/self-managed-workers). + +![Diagram of user requests flow through the self-managed worker before connecting to the target system.](/img/docs/boundary/self-managed-workers.png) diff --git a/content/hcp-docs/content/docs/boundary/index.mdx b/content/hcp-docs/content/docs/boundary/index.mdx new file mode 100644 index 0000000000..700c5a5a65 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/index.mdx @@ -0,0 +1,41 @@ +--- +page_title: Overview +sidebar_title: Overview +description: |- + This topic provides an overview of HCP Boundary, HashiCorp's secure access management solution. +--- + +# What is HCP Boundary? + +HCP Boundary is a fully managed, cloud-based workflow that enables secure connections to remote hosts and critical systems across cloud and on-premises environments. +As a managed service, HCP Boundary enables zero-trust networking without needing to manage the underlying infrastructure. To get started with HCP Boundary today, +visit [our onboarding guide](/boundary/tutorials/hcp-getting-started). + +![Boundary Overview](/img/docs/boundary/boundary-overview.png) + +## Why HCP Boundary? + +HCP Boundary reduces the complexity of managing access to infrastructure and enables users to simply log in, select the desired host or system, and connect. +HCP Boundary handles the routing, connections, and credential brokering on the backend. Users securely connect to their remote systems or hosts without exposing +a credential, address, or network. + +The need for secure remote access to dynamic environments is growing rapidly, and today's solutions (such as VPNs, SSH bastions, and PAM) fail to scale effectively +in ephemeral, multi-cloud environments.
Current solutions are complex and require multiple network addresses, credentials, permissions, and expertise for users to +access remote hosts and systems. These solutions commonly grant users access to entire networks and credentials, vastly increasing the attack surface area. Users +often require multiple credentials for the network, hosts, and possibly services. With ephemeral environments, maintaining addresses is even more brittle and +onboarding or offboarding users is often manual, resulting in increased overhead, helpdesk tickets, and time to value. + +**For administrators:** Boundary provides a simple way for verified users to have secure access to cloud and self-managed infrastructures without exposing your network +or requiring users to manage credentials. Boundary fully automates workflows for both user and target onboarding, which drastically minimizes the configuration overhead +for operators and, unlike traditional access solutions, enables them to keep users and targets up-to-date in cloud environments. HCP Boundary makes it even easier +to take advantage of Boundary by removing the operational overhead of managing it in your environment. + +**For developers:** Boundary offers developers a standardized workflow for connecting to their infrastructure, wherever it resides. Boundary's consistent access +workflow removes the need to manage target credentials or target network addresses. This increases developer productivity by reducing time spent figuring out how +to connect to remote systems. Boundary's automated service discovery provides an easier and faster experience for accessing dynamic infrastructure. + +## Tutorial +Refer to the [Getting Started with HCP Boundary](/boundary/tutorials/hcp-getting-started) tutorial to get hands-on with HCP Boundary +and set up your managed Boundary environment.
+ + diff --git a/content/hcp-docs/content/docs/boundary/maintenance-window.mdx b/content/hcp-docs/content/docs/boundary/maintenance-window.mdx new file mode 100644 index 0000000000..7e160a826e --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/maintenance-window.mdx @@ -0,0 +1,32 @@ +--- +page_title: Configure maintenance windows +description: |- + This topic describes how to configure a maintenance window for HCP upgrades and updates. +--- + +# Configure maintenance windows + +HCP Boundary automatically updates your environment when a newer version of Boundary is released. +You can alternatively schedule maintenance windows for updates to ensure that your end users' productivity is not disrupted during peak hours. +If you schedule a maintenance window, HCP Boundary waits until the day and time you selected to apply any patch or major version updates. + +1. Log in to [the HCP Portal](https://portal.cloud.hashicorp.com/), and navigate to the **Overview** page for the Boundary cluster you want to configure. +1. Click **Manage**, and then select **Edit configuration**. +1. On the **Maintenance window** tab, select one of the three options to control when your cluster is updated: + - **Automatic**: Updates the cluster automatically when a new version of Boundary is released for HCP. + - **Manual**: Allows you to update the cluster to new versions manually. + If you select **Manual**, you will receive an email when a new version of Boundary is available. + When there is a pending update, a banner displays information about the new release on the **Overview** page. + ![The banner that displays when there is a pending update.](/img/docs/boundary/new-version-available.png) + + + + If you select **Manual**, but you do not update to a new version within 30 days, the update happens automatically. + + + + - **Scheduled**: Allows you to select a day and time window for the update to occur. Note that times are listed in UTC. +1. Click **Save**.
+ + HCP Boundary will now apply updates according to the maintenance window you configured. + HCP Portal users also receive an email notification when a cluster has been updated successfully. \ No newline at end of file diff --git a/content/hcp-docs/content/docs/boundary/security-model.mdx b/content/hcp-docs/content/docs/boundary/security-model.mdx new file mode 100644 index 0000000000..f70ab9679c --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/security-model.mdx @@ -0,0 +1,186 @@ +--- +page_title: Security Model +description: |- + Security model of HCP Boundary and the security controls available to end users. +--- + +# Security Model + +This document explains the security model of HCP Boundary and the security +controls available to end users. Additionally, it provides best practices for +HCP Boundary-specific features such as self-managed workers and identity +integration with HCP, and it describes the multi-tenant architecture and +mitigations to address known threats. + +-> **NOTE:** Boundary and HCP Boundary are still under active development, and the security model content will be updated over time. + +## Key Terms + +Definition of commonly used acronyms and terms. + +| Term/Acronym | Definition | +|---------------|--------------------------------------------------------------| +| HCP | HashiCorp Cloud Platform | +| HCPb | HCP Boundary | +| Identity | Principals such as users and groups | + +For more information about the HCP platform, refer to the [HCP +documentation](/hcp/docs/hcp). Other Boundary-specific +concepts can be found in the [Boundary +documentation](/boundary/docs/concepts). + +## Personas + +- **SRE Operators:** Site reliability engineers and infrastructure operators + charged with deploying Boundary and managing the availability of the service. + In HCP Boundary, operators are HashiCorp employees.
+ +- **Administrators:** Security administrators, or “admins”, are responsible for + defining and enabling their organization’s digital security posture. They + define access policies to critical targets and prove compliance. These admins + may already be administrators in their HCP organization and expect to have + default permissions in Boundary without needing to reconfigure a + Boundary-specific user profile. + +- **Users:** Human users that need to connect to infrastructure targets, but + typically do so outside of programmatic interfaces. For HCP Boundary GA, this + is emphatically a technical user and not a standard corporate/business user. + These are primarily personas that require infrastructure access, such as + systems administrators, IT admins, database analysts, developers, and DevOps + engineers. End-users may not have an HCP identity profile prior to deployment + of HCP Boundary. + +- **Anonymous or unauthenticated users:** Human users who can reach available + auth methods and use an auth method to attempt to log in to Boundary. These + users can list scopes, but cannot read them. + +## High-level Architecture + +HCP Boundary is deployed into a single AWS region across three availability +zones. Each customer cluster is deployed as a Nomad job of Docker +containers. The Nomad jobs are controlled by an external service that accesses +the Nomad cluster through the VPC’s PrivateLink. + +![HCP Boundary Architecture Diagram](/img/docs/boundary/boundary-architecture.png) + +### Public-facing URLs + +For a given HCP Boundary cluster, the only user-accessible endpoints are the +controllers, which have a randomly-generated, 32-character cluster UUID (e.g., +`https://<cluster-uuid>.boundary.hashicorp.cloud`). These machine-generated URLs +provide no discernible patterns, guarding against enumeration of controllers.
+ +![HCP Portal - Cluster ID](/img/docs/boundary/cluster-id.png) + +## Storage + +Boundary controller and worker infrastructure is stateless; all state +lives in the RDBMS. Each Boundary cluster is provided with a separate database +inside of an Aurora Postgres cluster. Access to the database is provided by the +[Vault database engine](/vault/docs/secrets/databases) +with dynamic credentials that are regularly rotated. + +![High-level view of components used in HCP Boundary](/img/docs/boundary/data-at-rest.png) + +### Tenancy Model + +HCP Boundary uses a multi-tenant RDS Postgres cluster with a separate database +per tenant. This architecture allows us to use security controls inherent to +Postgres's database isolation. All secret and sensitive row data is encrypted +with scope-specific, per-tenant keys (more information in the +[Data Encryption](#data-encryption) section). This is commonly +[referred](https://aws.amazon.com/blogs/database/multi-tenant-data-isolation-with-postgresql-row-level-security/) +to as a _Siloed_ multi-tenant database, as opposed to _Bridging_ or _Pooling_. A +silo model allows us to maintain the strictest security while simplifying the +architecture. + +### Data Encryption + +HCP Boundary clusters use the [Vault Transit secrets +engine](/vault/docs/secrets/transit) for their KMS keys +(root, recovery, worker-auth). Boundary controllers are provided access to the +Vault transit keys with a token that is assigned a policy that allows them to +access only their individual keys. These tokens are regularly rotated. + +Administrators may also use an external Key Management System, including Vault +and HCP Vault, to manage the key-encrypting (root) key. More information about +supported external KMS systems is available on the [Boundary documentation +page](/boundary/docs/configuration/kms).
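As a hedged illustration of the external KMS option, a controller configuration can point its root key at a Vault transit key. The address, token, and key name below are illustrative placeholders, not HCP-provided values.

```hcl
# Sketch of a kms stanza delegating the key-encrypting (root) key to
# Vault's transit secrets engine; all values are placeholders.
kms "transit" {
  purpose    = "root"
  address    = "https://vault.example.com:8200"
  token      = "s.EXAMPLETOKEN"
  key_name   = "boundary-root"
  mount_path = "transit/"
}
```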
+ + +## Self-Managed Worker + +Self-managed workers are workers that are managed by +administrators outside of HCP infrastructure, in their own cloud or on-premises +environments. Just like all Boundary worker-to-controller and client-to-worker +communication, self-managed workers connect to the controller and clients +over mutually-authenticated TLS. More information about authentication of +self-managed workers to the HCP Boundary controller can be found on the +[Boundary documentation +page](/boundary/docs/concepts/security/connections-tls#worker-led-pki-based-registration). + +!> **Caution:** A compromised worker may result in the compromise of the targets +assigned to this worker, as well as the integrity of the log data provided by +the compromised worker. + +## Data in Transit + +![High-level diagram of Boundary](/img/docs/boundary/data-in-transit.png) + +### Boundary Session Traffic + +All user-to-controller communication occurs over TLS. TLS configuration +options are available on the [Boundary documentation +page](/boundary/docs/configuration/listener/tcp#tls). All +other communication (worker-to-controller and client-to-worker) occurs over +mutually-authenticated TLS. These keys are automatically generated and managed +by Boundary. For more information about the use of TLS in Boundary, refer to the +[TLS in +Boundary](/boundary/docs/concepts/security/connections-tls#tls-in-boundary) +documentation. + +## Identity + +### HashiCorp Cloud Platform + +The HCP Platform allows administrators to perform high-level cluster lifecycle +operations such as cluster creation and deletion. HCP users and their +permissions can be managed through [the HCP Portal](https://portal.cloud.hashicorp.com/). + +![IAM navigation](/img/docs/iam-nav.png) + +-> **NOTE:** Once an HCP Boundary cluster is created, [Boundary users and +permissions](/boundary/tutorials/oss-administration/oss-manage-users-groups) +are managed directly within Boundary itself.
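As a hedged sketch of what managing identity inside Boundary looks like, the CLI can create users and roles directly; all IDs and the grant string below are illustrative placeholders.

```shell
# Create a user and a role in an org scope (IDs are placeholders)
boundary users create -name "alice" -scope-id o_1234567890
boundary roles create -name "session-readers" -scope-id o_1234567890

# Grant the role read/list on sessions
# (grant string follows Boundary's permission grammar; verify against
# your Boundary version's permissions documentation)
boundary roles add-grants -id r_1234567890 -grant "ids=*;type=session;actions=read,list"
```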
+ +### Boundary + +Boundary provides its own identity management with multiple authentication methods (auth +methods), as well as a fine-grained RBAC model: + +- Users are authenticated to Boundary Controllers using a Boundary-configured + [authentication + method](/boundary/docs/concepts/domain-model/auth-methods). +- [Boundary’s permission + model](/boundary/docs/concepts/security/permissions) + provides administrators the ability to granularly define role-based access + control to Boundary’s resources, including explicitly assigning permissions to + anonymous users. +- Authentication and authorization of all internal Boundary traffic is managed + using mutually-authenticated TLS as described in the [TLS in + Boundary](/boundary/docs/concepts/security/connections-tls) + documentation. + +### Cluster Creation + +When an organization’s administrator creates an HCP cluster tenant, they +are prompted to create administrative credentials to bootstrap the cluster. +They may then use Boundary-specific authentication methods to connect directly +to the controller and perform administrative tasks. + +## Audit + +Boundary provides the ability to log all events. More information about the +supported log sinks is available in the [events configuration +documentation](/boundary/docs/configuration/events). +HCP Boundary logs can be sent to [external log ingestion systems](/hcp/docs/boundary/audit-logging). diff --git a/content/hcp-docs/content/docs/boundary/self-managed-workers/index.mdx b/content/hcp-docs/content/docs/boundary/self-managed-workers/index.mdx new file mode 100644 index 0000000000..462d4f3f63 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/self-managed-workers/index.mdx @@ -0,0 +1,111 @@ +--- +page_title: Self-managed worker operations +description: |- + Learn how self-managed workers let HCP Boundary users securely connect to private endpoints without exposing their organizations' networks to the public.
+--- + +# Self-managed worker operations + +Self-managed workers allow Boundary users to securely connect to private endpoints without exposing their organizations' networks to the public, or to HashiCorp-managed resources. +The self-managed worker nodes proxy any session activity. + +Self-managed workers use public key infrastructure (PKI) for authentication. +They authenticate to Boundary using a certificate-based method that allows you to deploy workers without using a shared key management service (KMS). +For more information about authorizing and authenticating workers, refer to [Worker configuration](/boundary/docs/configuration/worker/worker-configuration). + +This page outlines operational guidance for running self-managed workers with HCP Boundary in production. + +## Network requirements for Boundary workers + +Boundary workers proxy connections to remote endpoints. +Workers can either proxy connections to target endpoints, proxy connections from Boundary control plane traffic to private Vault environments and other peer services, or both. + +In multi-hop sessions, you can use a chain of workers to proxy connections to targets. +Multi-hop configurations are helpful in situations where inbound traffic to the target's network is not allowed. +A worker in a private network sends outbound communication to its upstream worker, and can then create reverse proxies to establish sessions. + +The following sections describe worker network connectivity requirements depending on what the worker is used for: + +- [Proxying target connections](#workers-proxying-target-connections) +- [Proxying Vault connections](#workers-proxying-vault-connections) +- [Proxying multi-hop sessions](#workers-proxying-multi-hop-sessions) + +### Workers proxying target connections + +There are three network connectivity requirements for workers that proxy connections to targets: + +1. 
Outbound access on port 9202 to an existing trusted Boundary control point (either another trusted worker or the Boundary control plane, reachable at the cluster URL)
+1. Outbound access to the target
+1. Inbound access from the client trying to establish the session
+
+The third requirement does not require exposure to the public internet, just inbound access from clients.
+Consider the case of Boundary being accessed by clients from a private corporate network (not the public internet) to facilitate connections to a separate private datacenter network:
+
+- The worker would need outbound connectivity to a trusted Boundary control point (either another trusted worker or the Boundary control plane, that is, the cluster URL)
+- The worker would need outbound connectivity to the host network (the datacenter network or cloud VPC) so that it can make outbound (worker->host) calls to hosts
+- The worker would need to allow inbound (client->worker) connections from the client's network (in this scenario, the corporate network, not the public internet)
+
+### Workers proxying Vault connections
+
+When proxying connections to private Vault clusters, workers have two network connectivity requirements:
+
+1. Outbound access to an existing trusted Boundary control point (either another trusted worker or the Boundary control plane, that is, the cluster URL)
+1. Outbound access to the destination private Vault
+
+The following diagram illustrates worker connectivity directionality based on the requirements above for HCP Boundary with self-managed workers.
+
+![Boundary Access Model](/img/docs/boundary/direction-of-network-traffic.png)
+
+### Workers proxying multi-hop sessions
+
+In multi-hop sessions, the workers can serve three different functions:
+
+1. **Ingress worker** - An ingress worker is a worker that is accessible by the client.
+The client initiates the connection to the ingress worker.
+1. 
**Intermediary worker** - An optional intermediary worker sits between ingress and egress workers as part of a multi-hop chain.
+There can be multiple intermediary workers in a multi-hop chain.
+1. **Egress worker** - An egress worker is a worker that can access the target.
+The egress worker initiates reverse proxy connections to intermediary or ingress workers.
+
+The functions are general ways to describe how workers interact with resources.
+A worker can perform more than one function if it meets the requirements.
+For more information, refer to [Multi-hop sessions](/boundary/docs/concepts/workers#multi-hop-sessions-hcp-ent).
+
+When you proxy connections to targets in multi-hop sessions, the ingress, intermediary, and egress workers have the following additional requirements.
+
+#### Ingress workers
+
+Similar to single layer workers, ingress workers in a multi-hop session must have:
+
+- Outbound access to the Boundary control plane on port 9202
+- Inbound access from clients
+
+#### Intermediary workers
+
+In a multi-hop session, intermediary workers require:
+
+- Outbound access to an upstream worker
+
+  The upstream worker may be an ingress worker or another intermediary worker.
+  Any upstream intermediary worker must eventually connect to an ingress worker through trusted intermediary workers.
+
+- Inbound access from a downstream worker
+
+  The downstream worker may be an egress worker or another downstream worker.
+  Any downstream intermediary worker must eventually connect to an egress worker through trusted intermediary workers.
+
+#### Egress workers
+
+In a multi-hop session, the egress workers on the target's side require:
+
+- Outbound access to an upstream worker
+- Outbound access to the destination host
+
+  Inbound session connections from clients reach the egress worker through a reverse proxy connection that the egress worker initiates with the ingress worker.
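The upstream relationships described above map directly onto worker configuration. The following sketch shows how an egress worker in a private network might point at an intermediary worker instead of the HCP control plane. The address, storage path, and tag values are placeholder assumptions, not values from this document:

```hcl
# Hypothetical egress worker in a private network.
# It cannot reach HCP directly, so instead of setting
# hcp_boundary_cluster_id it dials an intermediary worker upstream.
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  # Placeholder address of an intermediary (or ingress) worker
  initial_upstreams = ["intermediary.example.internal:9202"]
  auth_storage_path = "/var/lib/boundary/worker"
  tags {
    type = ["egress"]
  }
}
```

An ingress worker at the edge of the chain would instead set `hcp_boundary_cluster_id`, as described in the installation instructions.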
+ +The following diagram illustrates the direction of worker connectivity in a a multi-hop session. +The arrows show the direction in which network communication is initiated. +The white lines represent the Boundary cluster's control plane traffic. +The red lines represent the direction of a user's session traffic. + +![Self-managed workers in a multi-hop session](/img/docs/boundary/HCP-multi-hop-arch.png) diff --git a/content/hcp-docs/content/docs/boundary/self-managed-workers/install-self-managed-workers.mdx b/content/hcp-docs/content/docs/boundary/self-managed-workers/install-self-managed-workers.mdx new file mode 100644 index 0000000000..d50e033531 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/self-managed-workers/install-self-managed-workers.mdx @@ -0,0 +1,501 @@ +--- +page_title: Install self-managed workers +description: |- + Learn how to install and start your own HCP Boundary self-managed workers to securely connect to private endpoints without exposing your network to the public. +--- + +# Install self-managed workers + +HCP Boundary allows organizations to register and manage their own workers. +You can deploy these [self-managed +workers](/hcp/docs/boundary/self-managed-workers/) in private networks, and they +can communicate with an upstream HCP Boundary cluster. + +For a step-by-step example of configuring a self-managed worker instance, refer +to the self-managed workers +[tutorial](/boundary/tutorials/hcp-administration/hcp-manage-workers). + +To install and configure a self-managed worker, complete the procedures below. + +## Download the Boundary Enterprise binary + + + + +1. Navigate to the Boundary [releases page](https://releases.hashicorp.com/boundary/) and download the latest Boundary Enterprise binary for your operating system. + + For Linux there are multiple versions of the binary available, based on + distro and architecture. Select the correct package to download the + zip to your local machine. 
Then, extract the + `boundary` binary. + + Alternatively, refer to the examples below for installing the latest version of the `boundary-enterprise` package using a package manager. + + + + + ```shell-session + $ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg + $ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list + $ sudo apt update && sudo apt install boundary-enterprise -y + ``` + + + + + ```shell-session + $ sudo yum install -y yum-utils + $ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo + $ sudo yum -y install boundary-enterprise + ``` + + + + + ```shell-session + $ sudo dnf install -y dnf-plugins-core + $ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo + $ sudo dnf -y install boundary-enterprise + ``` + + + + ```shell-session + $ sudo yum install -y yum-utils shadow-utils + $ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo + $ sudo yum -y install boundary-enterprise + ``` + + + +1. After downloading the binary, ensure the `boundary` version matches the HCP Boundary control plane's version in order to benefit from the latest HCP Boundary features. + + Use the following command to verify the version: + + ```shell-session + $ boundary version + + Version information: + Build Date: 2024-11-18T16:04:45Z + Git Revision: d648fb7e0fe80d45df04faa165161ede74014888 + Metadata: ent + Version Number: 0.18.1+ent + ``` + + + + +1. Navigate to the Boundary boundary [releases page](https://releases.hashicorp.com/boundary/), and download the latest Boundary Enterprise binary for your operating system. + + For MacOS there are two versions of the binary available, based on whether + your processor is AMD64 (x86_64) or ARM64. 
+ + Select the correct package to download the boundary zip to your local + machine. Then extract the `boundary` binary. + + Alternatively, you can use the following commands to download the + `boundary` binary. + + Note that this example uses an AMD64 processor and the 0.11.2 version of the + HCP Boundary control plane. Update these values to match the binary needed + for your environment. + + ```shell-session + $ wget -q https://releases.hashicorp.com/boundary/0.13.0+ent/boundary_0.13.0+ent_darwin_amd64.zip ;\ + /usr/bin/unzip *.zip + ``` + +1. After downloading the binary, ensure the `boundary` version matches the HCP Boundary control plane's version in order to benefit from the latest HCP Boundary features. + + Use the following command to verify the version: + + ```shell-session + $ ./boundary version + + Version information: + Build Date: 2024-11-18T16:04:45Z + Git Revision: d648fb7e0fe80d45df04faa165161ede74014888 + Metadata: ent + Version Number: 0.18.1+ent + ``` + + + + +1. Navigate to the Boundary boundary [releases page](https://releases.hashicorp.com/boundary/), and download the latest Boundary Enterprise binary for your operating system. + + For Windows there are two versions of the binary available, based on whether + your processor is AMD64 (x86_64) or i386. + + Select the correct package to download the boundary zip to your local + machine. Then extract the `boundary` binary. + + Alternatively, you can use the following command to download and extract the + boundary binary. Note that this example uses an AMD64 processor and + the 0.13.0 version of the HCP Boundary control plane. Update these values to + match the binary needed for your environment. + + ```shell-session + $ Invoke-WebRequest -OutFile boundary.zip https://releases.hashicorp.com/boundary/0.13.0+ent/boundary_0.13.0+ent_windows_amd64.zip ; + Expand-Archive -Path boundary.zip -DestinationPath . + ``` + +1. 
After downloading the binary, ensure the `boundary` version matches the HCP Boundary control plane's version in order to benefit from the latest HCP Boundary features. + + Use the following command to verify the version: + + ```shell-session + $ .\boundary.exe version + + Version information: + Build Date: 2024-11-18T16:04:45Z + Git Revision: d648fb7e0fe80d45df04faa165161ede74014888 + Metadata: ent + Version Number: 0.18.1+ent + ``` + + If you want to install `boundary` as a system-wide executable, + [update your system's global + path](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) + to include the path to `boundary.exe`. + + + + +## Create the self-managed worker configuration file + +Next, create a self-managed worker configuration file. +Refer to the [complete configuration example](/boundary/docs/configuration/worker/worker-configuration#complete-configuration-example) to view all valid configuration options. + + + + +1. Create a new file to store the worker configuration. + + ```shell-session + $ touch worker.hcl + ``` + +1. Open the `worker.hcl` file with a text editor, such as Vi. Paste the + following configuration information into the `worker.hcl` file: + + + + ```hcl + disable_mlock = true + + hcp_boundary_cluster_id = "" + + listener "tcp" { + address = "127.0.0.1:9202" + purpose = "proxy" + } + + worker { + auth_storage_path = "/home/myusername/worker" + tags { + type = ["worker", "linux"] + } + } + ``` + + + +1. Update the configuration fields in the `worker.hcl` file as necessary. +You can specify the following configuration fields for self-managed workers: + + - The `hcp_boundary_cluster_id` field accepts a Boundary cluster ID and is + used by the worker when it initially connects to HCP Boundary. You configure + this field externally to the `worker` stanza. + + The cluster ID is the UUID in the HCP Boundary cluster URL. 
For example,
+    if your cluster URL is
+    `https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud`,
+    then the cluster ID is `c3a7a20a-f663-40f3-a8e3-1b2f69b36254`.
+
+  - The `listener` stanza in the example above sets the `address` to
+    `127.0.0.1:9202`. To accept connections from other machines, bind a
+    routable address such as `0.0.0.0:9202`, and make sure your firewall or
+    cloud security group allows inbound TCP connections on the port. If you
+    want to use a custom listener port, you can specify it in this field.
+
+  - If set, the `public_addr` field should match the public IP or DNS name of your self-managed worker instance. The `public_addr` does not need to be set if the worker has outbound access to an upstream worker or controller. In the unlikely event that you deploy the Boundary client and worker on the same local machine, you should omit the `public_addr` attribute.
+
+    For an example of the Boundary client and worker being deployed on the same local machine, refer to the [Configure the worker](/boundary/tutorials/hcp-administration/hcp-manage-workers#configure-the-worker) section of the self-managed worker tutorial.
+
+  - The `auth_storage_path` is a local path where the worker stores its
+    credentials. You should not share storage between workers. This field should
+    match the full path to the `/worker/` directory, such as:
+
+    `/home/myusername/worker`
+
+  - The `initial_upstreams` value indicates the address or addresses a worker
+    uses when initially connecting to Boundary. You can use `initial_upstreams`
+    in the `worker` stanza as an alternative to the `hcp_boundary_cluster_id`.
+
+    For most use cases, the `hcp_boundary_cluster_id` is sufficient for ensuring that connectivity is always available, even if the HCP-managed upstream workers change.
+    You should only configure an `initial_upstreams` value if you want to connect this worker to another self-managed or HCP-managed worker as part of a [multi-hop session](/boundary/docs/concepts/connection-workflows/multi-hop) topology.
+ Make sure to use `hcp_boundary_cluster_id` to connect self-managed workers to HCP Boundary. + + The example above uses the `auth_storage_path` and the `hcp_boundary_cluster_id` values. + If you want to configure `initial_upstreams` instead, you should omit the `hcp_boundary_cluster_id`. + +1. Save the `worker.hcl` file. + + + + + +1. Create a new file to store the worker configuration. + + ```shell-session + $ touch worker.hcl + ``` +1. Open the `worker.hcl` file with a text editor, such as Vi. Paste the + following configuration information into the `worker.hcl` file: + + + + ```hcl + disable_mlock = true + + hcp_boundary_cluster_id = "" + + listener "tcp" { + address = "127.0.0.1:9202" + purpose = "proxy" + } + + worker { + auth_storage_path = "/Users/myusername/worker" + tags { + type = ["worker", "macos"] + } + } + ``` + + + +1. Update the configuration fields in the `worker.hcl` file as necessary. +You can specify the following configuration fields for self-managed workers: + + - The `hcp_boundary_cluster_id` field accepts a Boundary cluster ID and is + used by the worker when it initially connects to HCP Boundary. You configure + this field externally to the `worker` stanza. + + The cluster ID is the UUID in the HCP Boundary cluster URL. For example, + the cluster ID is `c3a7a20a-f663-40f3-a8e3-1b2f69b36254`, if your cluster + URL is: + + `https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud` + + - The `listener` stanza in the example above sets the `address` port to + `0.0.0.0:9202`. This port should already be configured by the AWS security + group for this instance to accept inbound TCP connections. If you want to + use a custom listener port, you can specify it in this field. + + - If set, the `public_addr` field should match the public IP or DNS name of your self-managed worker instance. The `public_addr` does not need to be set if the worker has outbound access to an upstream worker or controller. 
In the unlikely event that you deploy the Boundary client and worker on the same local machine, you should omit the `public_addr` attribute. + + For an example of the Boundary client and worker being deployed on the same local machine, refer to the [Configure the worker](/boundary/tutorials/hcp-administration/hcp-manage-workers#configure-the-worker) section of the self-managed worker tutorial. + + - The `auth_storage_path` is a local path where the worker stores its + credentials. You should not share storage between workers. This field should + match the full path to the `/worker/` directory, such as: + + `/home/ubuntu/worker` + + - The `initial_upstreams` value indicates the address or addresses a worker + uses when initially connecting to Boundary. You can use `initial_upstreams` + in the `worker` stanza as an alternative to the `hcp_boundary_cluster_id`. + + For most use cases, the `hcp_boundary_cluster_id` is sufficient for ensuring that connectivity is always available, even if the HCP-managed upstream workers change. + You should only configure an `initial_upstreams` value if you want to connect to another self-managed worker. + Make sure to use `hcp_boundary_cluster_id` to connect self-managed workers to HCP Boundary. + + The example above uses the `auth_storage_path` and the `hcp_boundary_cluster_id` values. + If you want to configure `initial_upstreams` instead, you should omit the `hcp_boundary_cluster_id`. + +1. Save the `worker.hcl` file. + + + + + +1. Create a new file named `worker.hcl`: + + ```shell-session + $ touch worker.hcl + ``` + +1. Open the `worker.hcl` file with a text editor. Paste the following + configuration information into the `worker.hcl` file: + + + + ```hcl + disable_mlock = true + + hcp_boundary_cluster_id = "" + + listener "tcp" { + address = "127.0.0.1:9202" + purpose = "proxy" + } + + worker { + auth_storage_path = "C:/Users/myusername/worker" + tags { + type = ["worker", "windows"] + } + } + ``` + + +1. 
Update the configuration fields in the `worker.hcl` file as necessary. +You can specify the following configuration fields for self-managed workers: + + - The `hcp_boundary_cluster_id` field accepts a Boundary cluster ID and is + used by the worker when it initially connects to HCP Boundary. You configure + this field externally to the `worker` stanza. + + The cluster ID is the UUID in the HCP Boundary cluster URL. For example, + the cluster ID is `c3a7a20a-f663-40f3-a8e3-1b2f69b36254`, if your cluster + URL is: + + `https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud` + + - The `listener` stanza in the example above sets the `address` port to + `0.0.0.0:9202`. This port should already be configured by the AWS security + group for this instance to accept inbound TCP connections. If you want to + use a custom listener port, you can specify it in this field. + + - If set, the `public_addr` field should match the public IP or DNS name of your self-managed worker instance. The `public_addr` does not need to be set if the worker has outbound access to an upstream worker or controller. In the unlikely event that you deploy the Boundary client and worker on the same local machine, you should omit the `public_addr` attribute. + + For an example of the Boundary client and worker being deployed on the same local machine, refer to the [Configure the worker](/boundary/tutorials/hcp-administration/hcp-manage-workers#configure-the-worker) section of the self-managed worker tutorial. + + - The `auth_storage_path` is a local path where the worker stores its + credentials. You should not share storage between workers. This field should + match the full path to the `/worker/` directory, such as: + + `C:/Users/Administrator/worker` + + - The `initial_upstreams` value indicates the address or addresses a worker + uses when initially connecting to Boundary. You can use `initial_upstreams` + in the `worker` stanza as an alternative to the `hcp_boundary_cluster_id`. 
+ + For most use cases, the `hcp_boundary_cluster_id` is sufficient for ensuring that connectivity is always available, even if the HCP-managed upstream workers change. + You should only configure an `initial_upstreams` value if you want to connect to another self-managed worker. + Make sure to use `hcp_boundary_cluster_id` to connect self-managed workers to HCP Boundary. + + The example above uses the `auth_storage_path` and the `hcp_boundary_cluster_id` values. + If you want to configure `initial_upstreams` instead, you should omit the `hcp_boundary_cluster_id`. + +1. Save the `worker.hcl` file. + + + + +## Start the self-managed worker + +Once the configuration file is created, you can start the worker server. Use the +following command to start the server. You must provide the full path to the +worker configuration file, for example `/home/worker.hcl`. + +Note the `Worker Auth Registration Request:` value on line 12. You can also +locate this value in the `auth_request_token` file. You must provide this value +when you [Register a new worker with +HCP](/hcp/docs/boundary/self-managed-workers/register-self-managed-workers). 
+ +Enter the following command to start the worker: + + + + + + ```shell-session + $ ./boundary server -config="/home/myusername/worker.hcl" + + ==> Boundary server configuration: + + Cgo: disabled + Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy") + Log Level: info + Mlock: supported: true, enabled: false + Version: Boundary v0.11.2+hcp + Version Sha: f0006502c93b51291896b4c9a1d2d5290796f9ce + Worker Auth Current Key Id: knoll-unengaged-twisting-kite-envelope-dock-liftoff-legend + Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSR7RQJqCjDfxGSJZvEpwQpE7HzYvpDJ88a4QMP3cUUeBXhS5oTgck3ZvZ3nrZWD3HxXzgq4wNScpy7WE7JmNrrGNLNEFeqqMcyhjqGJVvg2PqiZA6arL6zYLNLNCEFtRhcvG5LLMeHc3bthkrbwLg7R7TNswTjDJWmwh4peYpnKuQ9qHEuTK9fapmw4fdvRTiTbrq78ju4asvLByFTCTR3nbk62Tc15iANYsUAn9JLSxjgRXTsuTBkp4QoqBqz89pEi258Wd1ywcACBHRT3 + Worker Auth Storage Path: /home/myusername/worker + Worker Public Proxy Addr: 52.90.177.171:9202 + + ==> Boundary server started! Log data will stream in below: + + {"id":"l0UQKrAg7b","source":"https://hashicorp.com/boundary/ip-172-31-86-85/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address 6f40d99c-ed7a-4f22-ae52-931a5bc79c03.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2023-01-10T04:34:52.616180263Z"} + ``` + + + + + + + + ```shell-session + $ ./boundary server -config="/Users/myusername/worker.hcl" + + ==> Boundary server configuration: + + Cgo: disabled + Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy") + Log Level: info + Mlock: supported: true, enabled: false + Version: Boundary v0.11.2+hcp + Version Sha: f0006502c93b51291896b4c9a1d2d5290796f9ce + Worker Auth Current Key Id: knoll-unengaged-twisting-kite-envelope-dock-liftoff-legend + Worker Auth Registration Request: 
GzusqckarbczHoLGQ4UA25uSRmUHY1BGSRA6cePp8RWHQFUYSrf3hnDw4ETPswFnMrcxx6tq7BUWD5azGULzPecPicuYGD6qg3qYvaGRgHgKwvh9FLY9Gu891KSj8hAef19JjHog8d7qpo9f9KoiwrhfcV2YxGyVu1P943656iNGCFHWiBR3ofsyTatQ7fzcMV2ciKtuYYGfx4FfiRStnkAzoE98RdR2LeCk2huRkFt7ayeeWVfD7Awm8xaZfFJn4pYRJwu2LRBeNs915warEBaS8XHXSKoi3cRUYif8Qu + Worker Auth Storage Path: /Users/myusername/worker + Worker Public Proxy Addr: 52.90.177.171:9202 + + ==> Boundary server started! Log data will stream in below: + + {"id":"l0UQKrAg7b","source":"https://hashicorp.com/boundary/ip-172-31-86-85/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address 6f40d99c-ed7a-4f22-ae52-931a5bc79c03.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2023-01-10T04:34:52.616180263Z"} + ``` + + + + + + + + ```shell-session + $ .\boundary server -config="C:\Users\myusername\worker.hcl" + + ==> Boundary server configuration: + + Cgo: disabled + Listener 1: tcp (addr: "0.0.0.0:9202", max_request_duration: "1m30s", purpose: "proxy") + Log Level: info + Mlock: supported: true, enabled: false + Version: Boundary v0.11.2+hcp + Version Sha: f0006502c93b51291896b4c9a1d2d5290796f9ce + Worker Auth Current Key Id: knoll-unengaged-twisting-kite-envelope-dock-liftoff-legend + Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSRmUHY1BGSRA6cePp8RWHQFUYSrf3hnDw4ETPswFnMrcxx6tq7BUWD5azGULzPecPicuYGD6qg3qYvaGRgHgKwvh9FLY9Gu891KSj8hAef19JjHog8d7qpo9f9KoiwrhfcV2YxGyVu1P943656iNGCFHWiBR3ofsyTatQ7fzcMV2ciKtuYYGfx4FfiRStnkAzoE98RdR2LeCk2huRkFt7ayeeWVfD7Awm8xaZfFJn4pYRJwu2LRBeNs915warEBaS8XHXSKoi3cRUYif8Qu + Worker Auth Storage Path: C:\Users\myusername\worker + Worker Public Proxy Addr: 52.90.177.171:9202 + + ==> Boundary server started! 
Log data will stream in below: + + {"id":"l0UQKrAg7b","source":"https://hashicorp.com/boundary/ip-172-31-86-85/worker","specversion":"1.0","type":"system","data":{"version":"v0.1","op":"worker.(Worker).StartControllerConnections","data":{"msg":"Setting HCP Boundary cluster address 6f40d99c-ed7a-4f22-ae52-931a5bc79c03.proxy.boundary.hashicorp.cloud:9202 as upstream address"}},"datacontentype":"application/cloudevents","time":"2023-01-10T04:34:52.616180263Z"} + ``` + + + + + +The worker starts and outputs its authorization token as `Worker Auth +Registration Request`. +It is also saved to a file, `auth_request_token`, +defined by the `auth_storage_path` in the worker configuration file. + +After you install and start the self-managed worker, you must +[register](/hcp/docs/boundary/self-managed-workers/register-self-managed-workers) +it with HCP in your environment's admin console. diff --git a/content/hcp-docs/content/docs/boundary/self-managed-workers/manage-self-managed-workers.mdx b/content/hcp-docs/content/docs/boundary/self-managed-workers/manage-self-managed-workers.mdx new file mode 100644 index 0000000000..80a1701f6e --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/self-managed-workers/manage-self-managed-workers.mdx @@ -0,0 +1,172 @@ +--- +page_title: Manage self-managed workers +description: |- + Learn how to view, update, and delete HCP Boundary self-managed workers to manage secure connections to your private endpoints without exposing your network. 
+--- + +# Manage self-managed workers + +You can use the following procedures to manage self-managed workers in HCP Boundary: + +- [View available workers](#view-available-workers) +- [View worker details](#view-worker-details) +- [Update self-managed workers](#update-self-managed-workers) +- [Delete self-managed workers](#delete-self-managed-workers) + +## View available workers + +Use the following command to view a list of any available workers: + + + +```shell-session +$ boundary workers list + +Worker information: + ID: w_r61cCrlm4M + Type: pki + Version: 1 + Address: 100.24.18.207:9202 + Last Status Time: Mon, 19 Sep 2022 22:40:02 UTC + Authorized Actions: + no-op + read + update + delete + add-worker-tags + set-worker-tags + remove-worker-tags + + ID: w_FYy3CJipUd + Type: kms + Version: 1 + Name: 797132fe-1fcb-1bb8-122c-abfb32acad39-worker + Address: 797132fe-1fcb-1bb8-122c-abfb32acad39.proxy.boundary.hashicorp.cloud:9202 + Last Status Time: Mon, 19 Sep 2022 22:40:02 UTC + Authorized Actions: + no-op + read + delete + add-worker-tags + set-worker-tags + remove-worker-tags + + ID: w_djfIunfBrR + Type: kms + Version: 1 + Name: 78eaad58-5e3f-4b04-83f6-360ef8828f07-worker + Address: 78eaad58-5e3f-4b04-83f6-360ef8828f07.proxy.boundary.hashicorp.cloud:9202 + Last Status Time: Mon, 19 Sep 2022 22:40:03 UTC + Authorized Actions: + no-op + read + delete + add-worker-tags + set-worker-tags + remove-worker-tags + + ID: w_xv0uKOxQW5 + Type: kms + Version: 1 + Name: 33c9d3bd-7326-2cf8-58ba-ee99ec43d34a-worker + Address: 33c9d3bd-7326-2cf8-58ba-ee99ec43d34a.proxy.boundary.hashicorp.cloud:9202 + Last Status Time: Mon, 19 Sep 2022 22:40:03 UTC + Authorized Actions: + no-op + read + delete + add-worker-tags + set-worker-tags + remove-worker-tags +``` + + + +## View worker details + +You can view information about the workers you have registered with HCP. 
+Viewing worker information can be useful if you need to copy worker details, such as the self-managed worker `ID` (`w_r61cCrlm4M` in the example below).
+
+Use the following command to read the worker details:
+
+```shell-session
+$ boundary workers read -id w_r61cCrlm4M
+
+Worker information:
+  Active Connection Count:   0
+  Address:                   100.24.18.207:9202
+  Created Time:              Mon, 19 Sep 2022 16:39:44 MDT
+  ID:                        w_r61cCrlm4M
+  Last Status Time:          2022-09-19 22:40:41.133773 +0000 UTC
+  Type:                      pki
+  Updated Time:              Mon, 19 Sep 2022 16:40:41 MDT
+  Version:                   1
+
+  Scope:
+    ID:      global
+    Name:    global
+    Type:    global
+
+  Tags:
+    Configuration:
+      type: ["worker" "dev"]
+    Canonical:
+      type: ["worker" "dev"]
+
+  Authorized Actions:
+    no-op
+    read
+    update
+    delete
+    add-worker-tags
+    set-worker-tags
+    remove-worker-tags
+```
+
+## Update self-managed workers
+
+To update a self-managed worker, issue an update request using the worker ID.
+The request should include the fields you want to update.
+
+```shell-session
+$ boundary workers update -id=w_r61cCrlm4M -name="worker1" -description="my first self-managed worker"
+
+Worker information:
+  Active Connection Count:   0
+  Address:                   100.24.18.207:9202
+  Created Time:              Mon, 19 Sep 2022 16:39:44 MDT
+  Description:               my first self-managed worker
+  ID:                        w_r61cCrlm4M
+  Last Status Time:          2022-09-19 22:41:04.793293 +0000 UTC
+  Name:                      worker1
+  Type:                      pki
+  Updated Time:              Mon, 19 Sep 2022 16:41:05 MDT
+  Version:                   2
+
+  Scope:
+    ID:      global
+    Name:    global
+    Type:    global
+
+  Tags:
+    Configuration:
+      type: ["worker" "dev"]
+    Canonical:
+      type: ["worker" "dev"]
+
+  Authorized Actions:
+    no-op
+    read
+    update
+    delete
+    add-worker-tags
+    set-worker-tags
+    remove-worker-tags
+```
+
+Updating a worker returns the updated resource details.
+
+## Delete self-managed workers
+
+To delete a self-managed worker, use the `boundary workers delete` command and pass the worker ID.
+To verify the deletion, check that the worker no longer exists with `boundary workers list`.
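For example, using the worker ID from the examples above:

```shell-session
$ boundary workers delete -id=w_r61cCrlm4M
```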
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/boundary/self-managed-workers/register-self-managed-workers.mdx b/content/hcp-docs/content/docs/boundary/self-managed-workers/register-self-managed-workers.mdx new file mode 100644 index 0000000000..5b9f551d12 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/self-managed-workers/register-self-managed-workers.mdx @@ -0,0 +1,155 @@ +--- +page_title: Register self-managed workers +description: |- + Learn how to register and create your own HCP Boundary self-managed workers to securely connect to private endpoints without exposing your network to the public. +--- + +# Register self-managed workers + +After you [install](/hcp/docs/boundary/self-managed-workers/install-self-managed-workers) and [start](/hcp/docs/boundary/self-managed-workers/install-self-managed-workers#start-the-self-managed-worker) the self-managed worker, you must register it with HCP. +You can use the Admin Console Web UI or Boundary CLI to register self-managed workers. + + + + +Complete the following steps to register the worker with HCP using the UI: + +1. Log in to the [HCP portal](https://portal.cloud.hashicorp.com/) as the admin user. + +1. From the HCP Portal's **Boundary** page, click **Open Admin UI**. + +1. Enter the admin username and password you created when you deployed the new instance and click **Authenticate**. + +1. Select **Workers** in the navigation pane. + +1. Click **New**. + +1. (Optional) You can construct the contents of the `worker.hcl` file on the new workers page, if you did not [create the configuration file](/hcp/docs/boundary/self-managed-workers/install-self-managed-workers#create-the-self-managed-worker-configuration-file) as part of the installation process. +Provide the following details, and Boundary constructs the worker configuration file for you: + - Boundary Cluster ID + - Worker public address + - Config file path + - Worker Tags + +1. 
Scroll to the bottom of the new workers page, and paste the **Worker Auth Registration Request** key. +Boundary provides you with the **Worker Auth Registration Request** key in the CLI output when you [start the self-managed worker](/hcp/docs/boundary/self-managed-workers/install-self-managed-workers#start-the-self-managed-worker). +You can also locate this value in the `auth_request_token` file. + +1. Click **Register Worker**. + +1. Click **Done**. + + The new self-managed worker appears on the **Workers** page. + + + + +Complete the following steps to register the worker with HCP using the UI: + +1. Use the following command to ensure that the `BOUNDARY_ADDR` is set as an environment variable: + + ```shell-session + $ export BOUNDARY_ADDR="https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud" + ``` + +1. Log into the CLI as the admin user, providing the Auth Method ID, admin login +name, and admin password when prompted. + + ```shell-session + $ boundary authenticate password \ + -auth-method-id=ampw_KfLAjMS2CG \ + -login-name=admin + ``` + + Example: + + + + ```shell-session + $ boundary authenticate password \ + -auth-method-id=ampw_KfLAjMS2CG \ + -login-name=admin + Please enter the password (it will be hidden): + + Authentication information: + Account ID: acctpw_r6crEm0FgM + Auth Method ID: ampw_KfLAjMS2CG + Expiration Time: Mon, 27 Jun 2024 22:03:28 MDT + User ID: u_ysJd0LXX9T + + The token was successfully stored in the chosen keyring and is not displayed here. + ``` + + + +1. Next, use the following command to export the **Worker Auth Request Token** value as an environment variable: + + ```shell-session + $ export WORKER_TOKEN= + ``` + + Boundary provides you with the **Worker Auth Registration Request** key in the CLI output when you [start the self-managed worker](/hcp/docs/boundary/self-managed-workers/install-self-managed-workers#start-the-self-managed-worker). + You can also locate this value in the `auth_request_token` file. 
   Boundary uses the token to issue a create worker request that authorizes the worker and makes it available.
   Currently, worker creation is only supported for self-managed workers with an authorization token.

## Create a new self-managed worker

Use the following command to create a new self-managed worker.
You can manage workers using the standard `boundary` CRUD commands: create, read, list, update, and delete.
Currently, you can only set addresses and tags in the worker configuration file.
Values that can be updated in the API are indicated as "Canonical".

```shell-session
$ boundary workers create worker-led -worker-generated-auth-token=$WORKER_TOKEN

Worker information:
  Active Connection Count: 0
  Created Time: Mon, 19 Sep 2023 16:39:44 MDT
  ID: w_r61cCrlm4M
  Type: pki
  Updated Time: Mon, 19 Sep 2023 16:39:44 MDT
  Version: 1

  Scope:
    ID: global
    Name: global
    Type: global

  Authorized Actions:
    no-op
    read
    update
    delete
    add-worker-tags
    set-worker-tags
    remove-worker-tags
```

The following fields are available on the Boundary worker resource:

- **Name:** The user-defined name for this worker.
- **Description:** The user-defined description for this worker.
- **Id:** The read-only ID for this worker.
- **Created time:** A timestamp indicating when the worker was created.
- **Last status time:** A timestamp indicating when the worker last sent data to a controller.
- **Updated time:** A timestamp indicating when this worker resource was last updated.
- **Version:** A read-only field indicating the version number for this resource.
- **Active connection count:** A read-only field indicating the number of active
  session connections this worker is currently proxying.
- **Scope:** The scope for this resource.
- **Scope_ID:** The ID of the scope for this resource.
- **Release version:** The release version of the worker.
- **Authorized actions:** The possible actions authorized for the current user.
- **Address:** The address Boundary uses when handling an authorize
  session request. This value is never empty and is set within the worker
  config file from the following values in decreasing priority:
  - The value set in the `public_address` field in the `worker` stanza, if
    present.
  - The value set in the `address` field of the `listener` stanza with the
    `"proxy"` purpose, if present.
- **Tags:** The tags set in the worker configuration file.
- **Type:** The type of worker. A read-only field.

\ No newline at end of file diff --git a/content/hcp-docs/content/docs/boundary/self-managed-workers/session-recording.mdx b/content/hcp-docs/content/docs/boundary/self-managed-workers/session-recording.mdx new file mode 100644 index 0000000000..98b80b3e5d --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/self-managed-workers/session-recording.mdx @@ -0,0 +1,56 @@
---
page_title: Configure session recording
description: |-
  Learn how to configure self-managed workers to record users' HCP Boundary sessions. You can view the recordings later for auditing purposes.
---

# Configure self-managed workers for session recording

This feature requires HCP Boundary or Boundary Enterprise.

[Session recording](/boundary/docs/configuration/session-recording) in HCP Boundary requires at least one self-managed worker with access to local and remote storage.
You must configure any self-managed workers that you want to use for session recording.
HCP Boundary managed workers cannot be used for session recording.

Session recording requires that you define an accessible directory as the `recording_storage_path` for storing in-progress session recordings.
On session closure, Boundary moves the local session recording to remote storage and deletes the local copy.
Refer to the following self-managed worker configuration example:

```hcl
disable_mlock = true

hcp_boundary_cluster_id = "1a2b3c4c5-1a2b3c-4a5b6c-7713-1a3bc5"

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  public_addr = ""
  auth_storage_path = "/var/lib/boundary"
  tags {
    type = ["worker", "worker-session-recording"]
  }
  recording_storage_path = "/local/storage/directory"
}
```

Update the self-managed worker configuration with the following values:

- `hcp_boundary_cluster_id` - The HCP Boundary cluster ID.
You can obtain the cluster ID from the HCP Boundary cluster URL.
For example, in the URL `https://1a2b3c4c5-1a2b3c-4a5b6c-7713-1a3bc5.boundary.hashicorp.cloud`, the cluster ID is `1a2b3c4c5-1a2b3c-4a5b6c-7713-1a3bc5`.
- `public_addr` - The public IP address or DNS name of the self-managed worker instance you want to configure for session recording.
- `auth_storage_path` - The local path where the worker stores its credentials.
You should not share storage between workers.
- `tags` - Any key-value pairs that targets use to determine where to route connections.
- `recording_storage_path` - The local path for storing session recordings that are in progress.
When the session is closed, the recording is moved to remote storage and Boundary deletes the local copy.

## Next steps

1. [Register the self-managed worker](/hcp/docs/boundary/self-managed-workers/register-self-managed-workers), if it is not already registered.
1. [Create a storage bucket](/boundary/docs/configuration/session-recording/create-storage-bucket).
1. [Enable session recording on a target](/boundary/docs/configuration/session-recording/enable-session-recording).
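The cluster ID described above is the host prefix of the cluster URL, so you can also extract it with shell parameter expansion. A small sketch using the example URL from this page (an illustration, not an official tool):

```shell
# Derive the HCP Boundary cluster ID by stripping the scheme and the
# boundary.hashicorp.cloud suffix from the cluster URL.
BOUNDARY_ADDR="https://1a2b3c4c5-1a2b3c-4a5b6c-7713-1a3bc5.boundary.hashicorp.cloud"
CLUSTER_ID="${BOUNDARY_ADDR#https://}"               # drop the scheme
CLUSTER_ID="${CLUSTER_ID%.boundary.hashicorp.cloud}" # drop the domain suffix
echo "$CLUSTER_ID"   # 1a2b3c4c5-1a2b3c-4a5b6c-7713-1a3bc5
```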
diff --git a/content/hcp-docs/content/docs/boundary/self-managed-workers/size-self-managed-workers.mdx b/content/hcp-docs/content/docs/boundary/self-managed-workers/size-self-managed-workers.mdx new file mode 100644 index 0000000000..1de057d311 --- /dev/null +++ b/content/hcp-docs/content/docs/boundary/self-managed-workers/size-self-managed-workers.mdx @@ -0,0 +1,47 @@
---
page_title: Recommendations for High Availability
description: |-
  Learn best practices for configuring and sizing HCP Boundary self-managed workers for high availability.
---

# Recommendations for high availability

Each network enclave that Boundary accesses needs at least one worker to provide access.
However, to ensure high availability for production use cases, we recommend at least three workers per network enclave.

The Boundary control plane intelligently assigns sessions to workers based on:

- Which workers are candidates to proxy a session, based on the workers' tags and the target's worker filter, and
- The health and connectivity of candidate workers.

Because the control plane manages session assignment, you do not need a load balancer to manage worker traffic.

Ultimately, the constraints of your access use case and the sensitivity of workloads in each network enclave dictate what level of redundancy and sizing you require for your workers.

## Sizing guidelines for self-managed workers

Sizing recommendations are divided into two common cluster sizes:

1. **Small** clusters are appropriate for most initial production deployments or for development and testing environments.

1. **Large** clusters are production environments with a large number of Boundary clients.

Worker performance is most affected by the number of concurrent sessions the worker is proxying and the rates of data transfer within those sessions.
Worker size depends on how you use Boundary.
For example, if you use Boundary for SSH connections and HTTP access to hosts, your instance selection and performance might differ significantly from a deployment that consistently performs large data transfers.

The following are general guidelines. However, we recommend that as you use Boundary, you continue to monitor your cloud providers' network throughput limitations for your machine types and observe relevant metrics where possible, in addition to other host metrics, so that you can scale Boundary horizontally or vertically as needed.

Some examples of relevant documentation include:

- AWS: [EC2 Network Performance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/general-purpose-instances.html#general-purpose-network-performance)
  and [Monitoring EC2 Network Performance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-network-performance-ena.html)
- Azure: [Azure Virtual Machine Throughput](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-machine-network-throughput)
  and [Accelerated Network for Azure VMs](https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-cli)
- GCP: [Network Bandwidth](https://cloud.google.com/compute/docs/network-bandwidth) and [About Machine Families](https://cloud.google.com/compute/docs/machine-types)

| **Provider** | **Size** | **Instance/VM Types**             |
|--------------|----------|-----------------------------------|
| AWS          | Small    | m5.large, m5.xlarge               |
|              | Large    | m5n.2xlarge, m5n.4xlarge          |
| Azure        | Small    | Standard_D2s_v3, Standard_D4s_v3  |
|              | Large    | Standard_D8s_v3, Standard_D16s_v3 |
| GCP          | Small    | n2-standard-2, n2-standard-4      |
|              | Large    | n2-standard-8, n2-standard-16     |

\ No newline at end of file diff --git a/content/hcp-docs/content/docs/boundary/support-policy.mdx b/content/hcp-docs/content/docs/boundary/support-policy.mdx new file mode 100644 index 0000000000..797e842f8f --- /dev/null +++
b/content/hcp-docs/content/docs/boundary/support-policy.mdx @@ -0,0 +1,41 @@
---
page_title: HCP Boundary Support Policy for Self-Managed Workers & Boundary Clients
description: |-
  This topic provides recommendations for upgrading and maintaining HCP Boundary self-managed worker and Boundary client versions
---

# Support policy

HCP Boundary environments function optimally when workers and controllers are running the same version.
The following sections outline API compatibility and hot-fix policies for self-managed workers in HCP Boundary environments.

## Controller and worker API compatibility

HCP Boundary only supports API backwards compatibility between HCP Boundary and self-managed workers from the prior “major release”. Using a worker version that is newer than the control plane it connects to is not supported.
A major release is identified by a change in the first (X) or second (Y) digit in the following versioning nomenclature: Version X.Y.Z.
All self-managed workers within an environment must be on the same version.

For example, Boundary self-managed workers version 0.11.0 are compatible with HCP Boundary environments running Boundary 0.12.0.
However, they will not have compatibility once the HCP Boundary control plane is updated to version 0.13.0 or above.

## Control plane and client/CLI compatibility

The supported version compatibility between HCP Boundary and Boundary clients/CLI is the same as between HCP Boundary and self-managed workers. HCP Boundary supports API backwards compatibility between the control plane and clients from the prior “major release”. Using clients on newer versions than the control plane they are registered with is not supported.

For example, Boundary clients version 0.14.0 are compatible with a Boundary control plane running Boundary 0.15.0.
However, they will not have compatibility once the control plane is updated to version 0.16.0 or above.
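For 0.Y.Z version strings, the prior-major-release rule described above can be sketched as a small version check. This is an illustration only (it assumes the major release is the Y digit), not an official compatibility tool:

```shell
# Sketch: a worker or client version is supported when it is not newer
# than the control plane and is at most one major release (Y) behind it.
is_compatible() {
  cp_y="$(echo "$1" | cut -d. -f2)"   # control plane major (Y)
  peer_y="$(echo "$2" | cut -d. -f2)" # worker/client major (Y)
  diff=$((cp_y - peer_y))
  [ "$diff" -ge 0 ] && [ "$diff" -le 1 ]
}

is_compatible "0.12.0" "0.11.0" && echo "0.11.0 worker, 0.12.0 control plane: supported"
is_compatible "0.13.0" "0.11.0" || echo "0.11.0 worker, 0.13.0 control plane: not supported"
```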
Boundary clients version 0.16.0 are not compatible with a Boundary control plane running Boundary 0.15.0 or lower.
We recommend running the latest versions of Boundary to take advantage of the newest features and bug fixes.

## Security and other bug fixes

Eligible code-fixes and hot-fixes for HCP Boundary self-managed workers are only provided via a new minor release (Z) on top of the latest “major release” branch.

## Shared responsibilities

HashiCorp is responsible for keeping the customers’ HCP Boundary control plane versions up to date with the latest release of Boundary software.
Customers are expected to maintain their self-managed workers and ensure that they are running the same version as the control plane.

All self-managed workers must be on the same major and minor version, without any exceptions.

diff --git a/content/hcp-docs/content/docs/changelog.mdx b/content/hcp-docs/content/docs/changelog.mdx new file mode 100644 index 0000000000..3e68d166e4 --- /dev/null +++ b/content/hcp-docs/content/docs/changelog.mdx @@ -0,0 +1,488 @@
---
page_title: Changelog
sidebar_title: Changelog
description: |-
  HashiCorp Cloud Platform Changelog.
---

# Changelog

### 2025-10-01

**HCP Europe now available**: With HCP Europe, your resources are hosted, managed, and billed separately to meet European data residency requirements. For more information, refer to [HCP Europe](/hcp/docs/hcp/europe).

### 2025-06-30

**HCP Vault 1.19.5 on AWS and Azure:** Vault 1.19.5 has started rolling out to HCP Vault Dedicated clusters on AWS and Azure. Refer to [1.19.5 Enterprise release notes](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1195) in GitHub to learn more about what's new in 1.19.5.

### 2025-06-10

**HCP Vault 1.18.10 on AWS and Azure:** Vault 1.18.10 has started rolling out to HCP Vault Dedicated clusters on AWS and Azure.
Refer to [1.18.10 Enterprise release notes](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#11810) in GitHub to learn more about what's new in 1.18.10. + +### 2025-05-28 +**HCP Vault 1.18.9 on AWS:** Vault 1.18.9 has started rolling out to HCP Vault Dedicated clusters on AWS. Refer to [1.18.9 Enterprise release notes](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1189-enterprise) in GitHub to learn more about what's new in 1.18.9. + +### 2025-04-29 +**HCP Boundary:** [HCP Boundary custom auth token Time to Live](/hcp/docs/boundary/configure-ttl) can be configured by the administrator. + +### 2025-04-23 +**HCP Vault Dedicated:** [Deleted Clusters on Azure](/hcp/docs/vault/get-started/delete-cluster) can now be restored for up to 30 days after deletion via support ticket. + +**HCP Vault Dedicated:** [HCP Identity-Based Proxy](/hcp/docs/vault/get-started/configure-private-access#hcp-identity-based-proxy) cluster access is now available for Azure clusters. + +### 2025-03-13 +**HCP Vault 1.18.4 on AWS and Azure:** Vault 1.18.4 has started rolling out to HCP Vault Dedicated clusters on AWS and Azure. Refer to [1.18.4 Enterprise release notes](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1184) in GitHub to learn more about what's new in 1.18.4. + +### 2025-02-18 + +**HCP Audit Log:** [HCP Audit log streaming](/hcp/docs/hcp/audit-log) is now generally available. + +### 2025-01-30 + +**HCP Audit Log:** Audit logs are now available to HCP Vault Dedicated clusters for Add Plugin, Delete Plugin, Is Plugin Registered, Lock, Unlock, Update Version and Restore Snapshot events. + +### 2024-12-11 + +**HCP Audit Log:** Audit logs are now available to HCP Vault Dedicated clusters for Create Cluster, Delete Cluster, Fetch Audit Log, Host Manager Alive and Create Snapshot events. + +### 2024-11-26 + +**HCP Audit Log:** Admin token generation audit logs are now available to HCP Vault Dedicated clusters. 
+ +### 2024-11-25 + +**HCP Vault Secrets:** New sync integration support for [GitLab](/hcp/docs/vault-secrets/integrations/gitlab-sync) is now available. + +### 2024-11-11 + +**HCP Vault 1.18.1 on AWS and Azure:** Vault 1.18.1 has started rolling out to +HCP Vault Dedicated clusters on AWS and Azure. Refer to [1.18.1 Enterprise +release notes](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1181) in GitHub to +learn more about what's new in 1.18.1. + + + +Workload identity federation (WIF) for HCP Vault Dedicated cluster's auth methods and secrets engines is not currently supported. + + + +### 2024-10-24 + +**HCP Access Management:** The default soft limit that controls the concurrent number of projects within an HCP organization has now been raised to 100. This enables more unique use cases to be managed within an HCP organization and scale with an organization's needs. Refer to [service quotas](/hcp/docs/hcp/admin/support#service-quotas) for more details. + +### 2024-10-15 + +**Vault Radar Public Beta:** [Vault Radar](/hcp/docs/vault-radar) is currently available to be tested by any HCP user. +If you would like to get started, go to your HCP project and claim your Radar instance. Refer to [Vault Radar product tiers](/hcp/docs/vault-radar/get-started/product-tiers) to learn more. + +**HCP Vault 1.15.15 on AWS:** Vault 1.15.15 is now available on HCP for AWS clusters. Refer to [1.15.15 Enterprise](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#11515-enterprise) in GitHub to learn more about what's new in 1.15.15. + +**HCP Vault Secrets:** [Dynamic secrets](/hcp/docs/vault-secrets/dynamic-secrets) for AWS and GCP are now available in public beta. + +### 2024-10-14 + +**HCP Audit Log Streaming**: A public beta of HCP's unified audit log streaming capabilities is now available. 
You can use a web-based UI workflow to send audit logs for your organization's platform and product events to one of the supported external SIEMs: AWS CloudWatch, Datadog, or Splunk Cloud. For more information, refer to [HCP audit log streaming](/hcp/docs/hcp/audit-log).

### 2024-10-05

**HCP Access Management:** Project-level service principals can now be assigned access to multiple projects. This enables workflows that need to interact with more than one project at a time with varying levels of permissions. Refer to [documentation](/hcp/docs/hcp/iam/service-principal) for more details.

### 2024-10-02

**HCP Vault 1.15.15 on AWS:** Vault 1.15.15 is now available on HCP for AWS clusters. Refer to [1.15.15 Enterprise](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#11515-enterprise) in GitHub to learn more about what's new in 1.15.15.

**HCP Vault 1.16.10 on Azure:** Vault 1.16.10 is now available on HCP for Azure clusters. Refer to [1.16.10 Enterprise](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#11610-enterprise) in GitHub to learn more about what's new in 1.16.10.

### 2024-09-09

**HCP Vault 1.16.9 on Azure:** Vault 1.16.9 is now available on HCP for Azure clusters. Refer to [1.16.9 Enterprise](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1169-enterprise) in GitHub to learn more about what's new in 1.16.9.

### 2024-09-06

**HCP Vault Dedicated:** Clusters can now be configured with a backup network (HVN) for cross-region disaster recovery protection. Refer to [this guide](/vault/tutorials/get-started-hcp-vault-dedicated/vault-ops#enable-cross-region-disaster-recovery) to configure it.
### 2024-08-08

**HCP Vault Secrets:** New sync integration and improvements
- [GitHub Sync Improvements - Multi Account support](/hcp/docs/vault-secrets/integrations/github-actions#multi-account-support)
- [HCP Terraform Sync](/hcp/docs/vault-secrets/integrations/hcp-terraform)

### 2024-07-24

HCP Packer now tracks rich CI/CD pipeline metadata. Refer to the following topics for additional information:
- [Rich CI/CD pipeline metadata](/hcp/docs/packer/store#rich-ci-cd-pipeline-metadata)
- [Build pipeline metadata reference](/hcp/docs/packer/reference/build-pipeline-metadata)

### 2024-05-14

**HCP Vault 1.15.8 on AWS:** Vault 1.15.8 is now available on HCP for AWS clusters. Refer to [1.15.8 Enterprise](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1158-enterprise) in GitHub to learn more about what's new in 1.15.8.

### 2024-05-13

**HCP Vault Secrets**: Enhanced RBAC support with two new roles, **App Manager** and **App Secrets Reader**, via the UI at the [project level](https://developer.hashicorp.com/hcp/docs/vault-secrets/permissions). Additionally, these roles can be applied at the app level via the [Terraform provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/vault_secrets_app_iam_binding?product_intent=vault).

### 2024-05-09

**HCP Vault 1.16.2 on Azure:** Vault 1.16.2 is now available on HCP for Azure clusters only. Click [here](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1162) to learn more about what's new in 1.16.2.

Notes:
The [Secrets Sync](https://developer.hashicorp.com/vault/docs/sync) feature, which went GA in Vault 1.16, will remain disabled on HCP Azure clusters while we work on integrating it with the platform itself. Additionally, the Secrets Sync beta that was available in Vault 1.15 won't be available anymore after the upgrade to 1.16, which is why we're holding off on releasing 1.16 to AWS clusters until the integration work is completed.
+ +### 2024-05-01 + +**HCP Boundary 0.16.0:** Boundary 0.16.0 is now available on HCP. Click [here](https://github.com/hashicorp/boundary/blob/main/CHANGELOG.md#0160) to learn more about what's new in 0.16.0. + +**HCP unified audit log streaming:** HCP's unified audit log streaming capabilities are now available as a public beta offering. You can use a new Terraform provider resource to send audit logs for platform and product events to one of the following external SIEMs: AWS Cloudwatch, Datadog, Splunk Cloud. For more information, refer to [HCP audit log streaming](/hcp/docs/hcp/security/audit-log). + +### 2024-04-25 + +**HCP Vault Secrets**: Sync integration support for [Azure Key Vault](https://developer.hashicorp.com/hcp/docs/vault-secrets/integrations/azure-key-vault) and [GCP Secret Manager](https://developer.hashicorp.com/hcp/docs/vault-secrets/integrations/gcp-secret-manager) are now generally available on HCP Vault Secrets. + +### 2024-04-15 + +**HCP Boundary 0.15.4:** Boundary 0.15.4 is now available on HCP. Click [here](https://github.com/hashicorp/boundary/blob/main/CHANGELOG.md#0154) to learn more about what's new in 0.15.4. + +In addition, HCP Boundary now supports the ability to manually upgrade clusters within a 30 +day grace period of a new Boundary release. This can be set in the cluster +configuration in [the HCP Portal](https://portal.cloud.hashicorp.com/). After 30 days the upgrade will be performed automatically. + +### 2024-03-21 + +**Vault Radar Limited Availability:** [Vault Radar](/hcp/docs/vault-radar) is currently available to a small set of interested customers. +If you would like to be an early adopter, you must [join this waitlist](https://www.hashicorp.com/go/hcp-vault-radar). + +### 2024-02-06 + +**HCP Vault 1.15.5:** Vault 1.15.5 is now available on HCP. Click [here](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1155) to learn more about what's new in 1.15.5. 
+ +### 2024-01-30 + +**HCP Packer New Nomenclature:** + +- Renaming of Image Buckets to Bucket: + - The prefix 'Image' has been removed from all the resources. This change simplifies the naming convention and makes it more intuitive for users to identify and manage their resources. +- Renaming of Iteration Resource to Version: + - The 'Iteration' resource has been renamed to 'Version'. This change better reflects the functionality of the resource and aligns with standard terminology used in version control and software development practices. +- Renaming of Image Resource to Artifact: + - The 'Image' resource has been renamed to 'Artifact'. This renaming is part of our ongoing efforts to enhance clarity and consistency in our resource naming. 'Artifact' more accurately represents the nature and usage of these resources in the multi-cloud environment. + +Notes: +These changes are part of our commitment to improving user experience and aligning with industry standards. +Version [0.82.0] of the HCP provider for Terraform reflects these changes. +For any questions or concerns, please reach out to our support team. + +### 2024-01-30 + +**HCP Boundary 0.15.0:** Boundary 0.15.0 is now available on HCP. Click [here](https://github.com/hashicorp/boundary/blob/main/CHANGELOG.md#0150-20240130) to learn more about what's new in 0.15.0. + +### 2024-01-18 + +**Vault Radar Private Beta:** [Vault Radar](/hcp/docs/vault-radar) is in private beta. Currently, users must accept the terms of use to participate in the beta program. + +### 2024-01-16 + +**IP Allow list** is available with **HCP Vault**. It allows you to add specific +IP addresses or CIDR ranges that will be permitted to access the HCP Vault +cluster's public endpoint (if public access is enabled). + +### 2023-12-08 + +**HCP Vault 1.15.4:** Vault 1.15.4 is now available on HCP. Click [here](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1154) to learn more about what's new in 1.15.4. 
### 2023-11-28

**HCP Vault 1.15.2:** Vault 1.15.2 is now available on HCP. Click [here](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1152) to learn more about what's new in 1.15.2.

### 2023-11-06

**HCP Vault expands observability support:** HCP Vault gains three new observability integrations with AWS CloudWatch, Elasticsearch, and New Relic, as well as a generic HTTP endpoint for flexible audit log and metrics streaming.

### 2023-11-02

**HCP Trial Billing Notifications:** The organization owner and admin users of any HCP account in Trial status (i.e. no credit card added) will receive email notifications when their trial credits are running low ($10 or less) or depleted completely.

### 2023-10-26

**Multiple performance replication secondaries for HCP Vault:** HCP Vault now supports multiple [performance replication](/hcp/docs/vault/perf-replication) secondaries on the Plus tier.

### 2023-10-11

**HCP Vault Secrets GA:** [HCP Vault Secrets](/hcp/docs/vault-secrets) is now generally available on the HashiCorp Cloud Platform.

### 2023-09-21

**HCP Vault 1.14.3:** Vault 1.14.3 is now available on HCP. Click [here](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1143) to learn more about what's new in 1.14.3.

### 2023-09-01

**HCP Groups:** Introducing HCP Groups, which allow you to bundle identities and treat them as one unit when assigning roles and associating them with projects. This enables logical user management and clear auditing of permissions. See the [documentation](/hcp/docs/hcp/iam/groups) and other information on how to get started.

### 2023-08-10

**HCP Vault 1.14.1:** Vault 1.14.1 is now available on HCP. Click [here](https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1141) to learn more about what's new in 1.14.1.

### 2023-07-27

**HCP Packer Audit Logs:** HCP Packer now supports audit log streaming to Datadog and Amazon CloudWatch.
Click [here](https://developer.hashicorp.com/hcp/docs/packer/audit-logs/streaming) to learn more. + +### 2023-07-25 + +**HCP Vault 1.14.0:** Vault 1.14.0 is now available on HCP. Click [here](https://developer.hashicorp.com/vault/docs/release-notes/1.14.0) to learn more about what's new in 1.14.0. + +### 2023-07-17 + +**ADP on HCP Vault Plus Clusters**: Advanced Data Protection (ADP) is now available on HCP Vault Plus clusters at no additional cost. Customers can now use the KMIP, Key Management, and Transform secrets engines that make up the ADP package. + +### 2023-06-13 + +**HCP Vault Secrets Public Beta:** [HCP Vault Secrets](/hcp/docs/vault-secrets) is now available on HashiCorp Cloud Platform in public beta. + +### 2023-05-15 + +**HCP Multi-project Support:** HCP now supports multiple [Projects](/hcp/docs/hcp/admin/projects) within an HCP Organization. Projects allow HCP admins to logically segment out their HashiCorp services within an Organization by team, environment, or use case with Project-level RBAC policies to ensure least-privilege access. All new and existing Orgs have a default project created. + +### 2023-05-01 + +**HCP Vault Oracle Plugin Support:** The [Oracle Database Secrets Engine](/vault/docs/secrets/databases/oracle) is now supported in HCP Vault on AWS. + +### 2023-04-25 + +**HCP Vault Azure Plus Tier General Availability:** Plus tier is now generally available on Azure in HCP Vault. Plus tier supports performance replication and Sentinel policies, in addition to all existing Standard tier functionality available for [HCP Vault on Azure](/hcp/docs/vault#hcp-vault-on-azure) + +### 2023-03-08 + +**HCP Packer channel assignment history and rollback**: View the history of channel assignment activity and automatically roll back to the previously assigned iteration when revoking a currently published image. 
### 2023-03-06

**HCP Consul IP Allow list:** This feature improves the security of your HCP Consul deployment by allowing access to the Consul server, UI, and API only from allowlisted IP CIDRs.

### 2023-03-02

**HCP Consul Management Plane Service General Availability:** This service is now generally available. With this service, users can get global visibility and control over both their self-managed and HashiCorp-managed Consul clusters.

### 2023-02-28

**HCP Vault General Availability on Azure:** HCP Vault is now generally available on Azure. HCP Vault gives you the power and security of HashiCorp Vault as a managed service. [Read more](https://www.hashicorp.com/blog/hcp-vault-on-microsoft-azure-is-now-generally-available)

**Additional Azure Regions:** HashiCorp Cloud Platform users can deploy HCP Consul and HCP Vault in four new Azure regions: Canada Central (canadacentral), Southeast Asia (southeastasia), Japan East (japaneast), and Australia Southeast (australiasoutheast).

### 2023-02-08

HashiCorp Cloud Platform users can deploy HCP Consul and HCP Vault in three new AWS regions: Tokyo, Japan (ap-northeast-1); Montreal, Canada (ca-central-1); and Ohio, US (us-east-2).

### 2023-02-08

**HCP Vault Beta Updates**: Users can now manage their beta clusters: they can manage snapshots, scale between tiers, manage major version upgrade settings, and stream audit logs and metrics.

### 2023-02-06

**Sentinel on HCP Vault Plus Clusters**: Sentinel is now available on HCP Vault Plus clusters at no additional cost.

### 2023-01-11

**HCP Consul Management Plane Service Public Beta**: This service is now publicly available. With this service, users can get global visibility and control over both their self-managed and HashiCorp-managed Consul clusters.
+ +### 2022-12-15 + +**Login with HCP to Vagrant Cloud**: Users can now link their HCP accounts to log into [Vagrant Cloud](https://app.vagrantup.com/), or create new accounts using HCP. For more details on this and our HCP strategy for Vagrant, click [here](https://discuss.hashicorp.com/t/adding-hcp-login-for-vagrant-cloud). + +### 2022-12-15 + +**HCP Vault Azure Beta Production Tiers**: Users can now create 3-node clusters as part of the Starter and Standard tiers on Azure. [Read more](/hcp/docs/vault/azure-public-beta#available-functionality) + +### 2022-12-05 + +**HCP Guided Flow for Peering Connections:** Users can use a wizard to guide them through the multiple steps to [peer their AWS VPC](/hcp/tutorials/networking/amazon-peering-hcp) to the HashiCorp Virtual Network. + +### 2022-10-25 + +**HCP Packer Ancestry:** Users can now track [image ancestry](/hcp/docs/packer/manage-image-use/ancestry) in HCP Packer to trace changes and vulnerabilities from a source image to all of its descendant images, as well as revoke all descendants through [inherited revocation](/hcp/docs/packer/manage-image-use/revoke-images#inherited-revocation). + +### 2022-09-27 + +**HCP Consul 1.13.2:** Consul 1.13.2 is now available on HCP. Click [here](https://github.com/hashicorp/consul/blob/main/CHANGELOG.md#1132-september-20-2022) to learn more about what's new in 1.13.2. Click [here](https://github.com/hashicorp/consul/blob/main/CHANGELOG.md#1130-august-9-2022) to read the full 1.13.0 changelog. + +### 2022-09-09 + +**Single step automated peering of HVNs to AWS VPCs:** Users can click a single button to peer HashiCorp Virtual Network to their AWS VPCs. + +### 2022-08-29 + +**Automated peering of HashiCorp Virtual Networks to AWS VPCs:** Users can leverage automation to peer HashiCorp Virtual Network to their AWS VPCs, which provides an alternative to manual peering, and reduces the peering time from 30-60 minutes to about 5 minutes. 
+
+### 2022-08-11
+
+**Platform Logs (HCP Consul):** Platform Logs are auditable events across HCP. They enable better tracking of changes within a customer's HCP Organization and help meet troubleshooting and compliance needs. This release covers HCP Consul and tracks activity such as cluster creation, upgrades, and snapshots. These logs can be seen when navigating to a particular Consul cluster within [the HCP Portal](https://portal.cloud.hashicorp.com/).
+
+### 2022-07-26
+
+**HCP Consul General Availability on Azure:** HCP Consul is now generally available on Azure. HCP Consul is a fully managed service mesh to discover and securely connect any service. [Read more](https://www.hashicorp.com/blog/hcp-consul-on-azure)
+
+### 2022-06-30
+
+**HCP Packer no longer redacts the image identifier for revoked iterations:** The image identifier `image_id` will no longer be replaced with `error_revoked` for revoked iterations. Packer will continue to error when building new images from iterations that are revoked. Terraform will not error, but users can validate iterations in their Terraform configurations to prevent new deployments of images from revoked iterations. [Read more about validating iterations in Terraform](/hcp/docs/packer/reference-image-metadata#validate-iterations-in-terraform-configurations)
+
+~> Note: This is a breaking change for Terraform configurations that depend on `error_revoked` to validate iterations.
+
+### 2022-06-07
+
+**HCP Consul on Azure Public Beta:** Consul on Azure is now available on HashiCorp Cloud Platform in public beta. [Read more](https://www.hashicorp.com/blog/hcp-consul-on-azure)
+
+### 2022-06-07
+
+**HCP Consul cluster creation experience redesign:** We have streamlined the cluster creation experience to allow users to configure their clusters using automated or manual workflows.
+ +### 2022-05-05 + +**HashiCorp Status site redesign:** The HashiCorp Status site, [https://status.hashicorp.com](https://status.hashicorp.com), has been redesigned to better meet the needs of the various customers relying on our services. We have streamlined the layout to make it easier to find services and understand how an incident is impacting them. Historical information is still preserved for customers interested in past incidents. + +### 2022-04-06 + +**HCP Terraform Provider 0.25.0:** Version 0.25.0 of the HCP Terraform Provider is now available. [Read more](https://github.com/hashicorp/terraform-provider-hcp/releases/tag/v0.25.0) + +- What's New: + - Users can now scale plus-tier clusters and use path filtering + +### 2022-03-29 + +**HCP Vault Plus Paths Filters and Cluster Resizing:** Users can now create deny paths filters for performance replicas. Users can also resize their Plus clusters and secondaries in-place. + +### 2022-02-22 + +**HCP Consul Audit Log Download:** Users can now download audit logs from all HCP Consul cluster tiers except “Development”. [Read more](/hcp/docs/consul/monitor/audit-logs) + +**HCP Vault Plus Configuration:** Users can create a new HCP Vault configuration: The Plus tier includes all of the functionality in the Standard tier and adds the ability to create multi-region performance replicas. The Plus tier has three size offerings (S, M, L), and unlimited clients. Read replicas will include: secrets, policies, secrets backend config, auth backends config, audit backends config, and batch tokens. [Read more](https://www.hashicorp.com/blog/multi-region-replication-now-available-with-hcp-vault) + +### 2021-12-16 + +**Remove Credit Card:** Users can now remove their credit card on the HCP portal billing page. The account must be in good standing and cannot run production-tier resources without a payment method. 
+
+### 2021-12-08
+
+**HCP Vault Cluster Scaling Now Available:** HCP Vault clusters can now be modified in-place based on tier (Dev, Starter, Standard) or size (Standard S, M, L).
+
+### 2021-10-14
+
+**HCP Vault and HCP Consul Available in Singapore and Sydney AWS Regions:** HCP Vault and HCP Consul clusters can now be deployed in Singapore and Sydney AWS regions. [Read more](/hcp/docs/hcp/supported-env/aws)
+
+### 2021-09-14
+
+**HCP Vault Admin Token Entity Policy Change:** Previously, HCP Vault admin tokens were not associated with a Vault identity, which could lead to unchecked client counts. Admin tokens now have an entity attached, so the maximum admin-token-associated client count for a cluster is 1 per month.
+
+### 2021-08-24
+
+**Multi-Factor Authentication Configuration:** Users can now disable MFA for their HCP accounts. [Read more](/hcp/docs/hcp/admin/mfa#disabling-mfa)
+
+### 2021-08-09
+
+**HCP Vault 1.8.0 (New Clusters Only):** Users can now use Vault 1.8.0 for new Vault clusters. [Read more about Vault 1.8.0](/vault/docs/release-notes/1.8.0)
+
+**HVN-HVN Peering:** It is now possible to peer two HVNs across regions within AWS. The peering can be created automatically through the HCP Consul Federation create flow or manually through the HCP Terraform Provider.
+
+### 2021-08-04
+
+**HCP Terraform Provider 0.12.0:** Version 0.12.0 of the HCP Terraform Provider is now available. [Read more](https://github.com/hashicorp/terraform-provider-hcp/releases/tag/v0.12.0)
+
+- What's New:
+  - HCP Vault: A new configuration is now available in the provider: `starter_small`
+
+### 2021-08-02
+
+**HCP Vault Starter Configuration:** Users can create a new HCP Vault configuration: The Starter configuration provides a production-grade cluster that balances predictable pricing, performance, and cost.
[Read more](https://www.hashicorp.com/blog/hcp-vault-starter)
+
+- Specs/features:
+  - 2 vCPU, 8 GiB RAM
+  - 5 GB storage, 250 GB for snapshots and audit logs (soft limits)
+  - 25 included clients
+  - Audit logs
+  - Snapshots and restores
+  - [Bronze tier Cloud support](https://www.hashicorp.com/customer-success/cloud-support)
+
+### 2021-07-30
+
+**Multi-Factor Authentication:** Users can now enable MFA from within their own HCP account. [Read more](/hcp/docs/hcp/admin/mfa)
+
+**HCP Consul Plus Configuration with Federation:** HCP Consul Plus allows users to federate Consul clusters across multiple regions for improved redundancy and resiliency of applications. This provides a simple and secure way for users to implement a multi-region service mesh in AWS. [Read more](https://www.hashicorp.com/blog/announcing-hcp-consul-plus)
+
+### 2021-07-20
+
+**HCP Vault Resource Quotas:** To help maintain the health of the fleet, the following resource limits are now in place:
+
+- Added cgroup-based resource limits for HCP Vault clusters
+- Added Vault API resource limits (requests/second)
+
+### 2021-07-16
+
+**HCP Terraform Provider 0.10.0:** Version 0.10.0 of the HCP Terraform Provider is now available. [Read more](https://github.com/hashicorp/terraform-provider-hcp/releases/tag/v0.10.0)
+
+- What's New:
+  - HCP Consul: Fixed an issue with updating the version of Consul
+
+### 2021-07-12
+
+**Organization Rename:** Users can now rename their HCP organization by navigating to the Org Management page, or by clicking `Settings` on the left navbar, then `Manage`, and then `Edit name`.
+
+### 2021-07-06
+
+**Terraform Landing Page:** [The HCP Portal](https://portal.cloud.hashicorp.com/) now includes a landing page for both TFC and the HCP Terraform Provider. It can be accessed from the main left-hand navigation under `Consul` and `Vault`.
+
+### 2021-06-30
+
+**HCP Terraform Provider 0.9.0:** Version 0.9.0 of the HCP Terraform Provider is now available.
[Read more](https://github.com/hashicorp/terraform-provider-hcp/releases/tag/v0.9.0)
+
+- What's New:
+  - HCP Consul: Users can now specify if auto peering should happen with `auto_hvn_to_hvn_peering`
+  - HCP Vault: Users can now update `public_endpoint` without having to recreate the cluster
+
+### 2021-06-18
+
+**HCP Terraform Provider 0.8.0:** Version 0.8.0 of the HCP Terraform Provider is now available. [Read more](https://github.com/hashicorp/terraform-provider-hcp/releases/tag/v0.8.0)
+
+- What's New:
+  - HCP Consul: A new configuration is now available in the provider: `plus`
+  - HCP Vault: New configurations are now available in the provider: `standard_small`, `standard_medium`, `standard_large`
+
+**UI Scalable Navigation:** The main headers and general approach to navigation within the portal have been significantly updated to improve usability.
+
+### 2021-06-17
+
+**User Profile Page:** The HCP Profile Page now shows your HashiCorp ID, and offers the ability to reset your password.
+
+### 2021-06-07
+
+**HCP Terraform Provider 0.7.0:** Version 0.7.0 of the HCP Terraform Provider is now available. [Read more](https://github.com/hashicorp/terraform-provider-hcp/releases/tag/v0.7.0)
+
+- What's New:
+  - Users can now manage HVN Routes from the provider
+
+~> **Note:** This version contains breaking changes to the `hcp_aws_transit_gateway_attachment` and `hcp_aws_network_peering` resources and data sources. Please pin to the previous version and follow [this migration guide](https://github.com/hashicorp/terraform-provider-hcp/pull/128) when you're ready to migrate.
+
+### 2021-05-17
+
+**AWS Transit Gateway Attachments:** As you grow your HCP footprint, you’ll need more elegant ways to simplify networking at scale. That’s why we introduced support for transit gateway attachments in HCP.
Transit gateways enable a “hub-and-spoke” configuration of your networks, a simpler and more secure option compared to the complexity of managing separate virtual private cloud (VPC) connections. Instead of establishing a VPC peering connection for every environment, you can create an “attachment” to a transit gateway. The transit gateway then manages these connections centrally. With a transit gateway, you can secure a single ingress/egress point, instead of monitoring multiple peering connections. To get started, [read this tutorial](/hcp/tutorials/networking/amazon-transit-gateway) and [watch the webinar](https://www.hashicorp.com/events/webinars/connecting-hcp-consul-with-aws-transit-gateways).
+
+**HVN Routes (Web Portal Only):** Users now have a single place to view and edit all of the connections associated with HVNs. This includes the option to view routes for both peerings and TGW attachments. Users can view, add, and delete HVN routes through the UI. [Read more](/hcp/docs/hcp/network/hvn-aws/routes)
+
+### 2021-05-03
+
+**Single Sign-On via Okta:** HCP allows organizations to configure SAML 2.0 SSO (Single Sign-On) as an alternative to traditional user management with GitHub and email-based options. This can help mitigate Account Take Over (ATO) attacks, provide a universal source of truth to federate identities from your identity provider (IDP), and better manage user access to your organization. At this time, HCP integrates with Okta as an identity provider, with others planned. [Read more](/hcp/docs/hcp/iam/sso)
+
+### 2021-04-07
+
+**New Sizing Options for HCP Consul Standard Tier:** Users have additional sizing options when deploying a Consul Standard cluster. Medium and Large VM sizes are now available in HCP. [See pricing](https://cloud.hashicorp.com/pricing/consul)
+
+**HCP Vault General Availability on AWS:** HCP Vault gives you the power and security of HashiCorp Vault as a managed service.
[Read more](https://www.hashicorp.com/blog/vault-on-the-hashicorp-cloud-platform-ga)
+
+### 2021-04-06
+
+**Free Credits Expanded:** New users now have $50 in credits for use on HCP. [Sign up](/hcp)
+
+### 2021-03-09
+
+**Email/Password Authentication:** Users can now log in and authenticate using email/password, in addition to GitHub. [Sign up](/hcp)
+
+### 2021-02-02
+
+**HCP Consul General Availability:** HCP Consul is now generally available on AWS. HCP Consul is a fully managed service mesh to discover and securely connect any service. [Read more](https://www.hashicorp.com/blog/announcing-hcp-consul-general-availability)
+
+### 2021-01-14
+
+**HCP Vault Public Beta:** HashiCorp Vault is now available on HashiCorp Cloud Platform in public beta. [Read more](https://www.hashicorp.com/blog/vault-on-the-hashicorp-cloud-platform-public-beta)
diff --git a/content/hcp-docs/content/docs/cli/commands/auth/index.mdx b/content/hcp-docs/content/docs/cli/commands/auth/index.mdx
new file mode 100644
index 0000000000..456c7be832
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/auth/index.mdx
@@ -0,0 +1,44 @@
+---
+page_title: hcp auth
+description: |-
+  The "hcp auth" command lets you authenticate to HCP.
+---
+
+# hcp auth
+
+Command: `hcp auth`
+
+The `hcp auth` command group lets you manage authentication to HCP.
+
+## Usage
+
+```shell-session
+$ hcp auth [Optional Flags]
+```
+
+## Examples
+
+Log in interactively using a browser:
+
+```shell-session
+$ hcp auth login
+```
+
+Log in using service principal credentials:
+
+```shell-session
+$ hcp auth login --client-id=spID --client-secret=spSecret
+```
+
+Log out of the CLI:
+
+```shell-session
+$ hcp auth logout
+```
+
+## Commands
+
+- [`login`](/hcp/docs/cli/commands/auth/login) - Log in to HCP.
+- [`logout`](/hcp/docs/cli/commands/auth/logout) - Log out from HCP.
+- [`print-access-token`](/hcp/docs/cli/commands/auth/print-access-token) - Print the access token for the authenticated account.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/auth/login.mdx b/content/hcp-docs/content/docs/cli/commands/auth/login.mdx
new file mode 100644
index 0000000000..1e8a3ce8e6
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/auth/login.mdx
@@ -0,0 +1,60 @@
+---
+page_title: hcp auth login
+description: |-
+  The "hcp auth login" command lets you log in to HCP.
+---
+
+# hcp auth login
+
+Command: `hcp auth login`
+
+The `hcp auth login` command lets you log in to HCP.
+
+If no arguments are provided, authentication occurs for your user principal by
+initiating a web browser login flow.
+
+To authenticate non-interactively, you may authenticate as a service principal.
+To do so, use the `--client-id` and `--client-secret` flags. A service principal
+may be created using `hcp iam service-principals create` or via the [HCP
+Portal](https://portal.cloud.hashicorp.com).
+
+If authenticating a workload using a Workload Identity Provider, a credential
+file may be used to authenticate by passing the path to the credential file
+using `--cred-file`. Run the command in the environment that the Workload
+Identity Provider was configured for, so that it can retrieve and federate
+external credentials.
+
+## Usage
+
+```shell-session
+$ hcp auth login [Optional Flags]
+```
+
+## Examples
+
+Log in interactively using a browser:
+
+```shell-session
+$ hcp auth login
+```
+
+Log in using service principal credentials:
+
+```shell-session
+$ hcp auth login --client-id=spID --client-secret=spSecret
+```
+
+Log in using Workload Identity credentials:
+
+```shell-session
+$ hcp auth login --cred-file=workload_cred_file.json
+```
+
+## Flags
+
+- `--client-id=ID` - Service principal Client ID used to authenticate as the given service principal.
+
+- `--client-secret=SECRET` - Service principal Client Secret used to authenticate as the given service principal.
+
+- `--cred-file=PATH` - Path to the credential file used for workload identity federation (generated by `hcp iam workload-identity-providers create-cred-file`) or service account credential key file.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/auth/logout.mdx b/content/hcp-docs/content/docs/cli/commands/auth/logout.mdx
new file mode 100644
index 0000000000..1da2c6c767
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/auth/logout.mdx
@@ -0,0 +1,26 @@
+---
+page_title: hcp auth logout
+description: |-
+  The "hcp auth logout" command lets you log out from HCP.
+---
+
+# hcp auth logout
+
+Command: `hcp auth logout`
+
+The `hcp auth logout` command logs you out of HCP, removing the CLI's access.
+
+## Usage
+
+```shell-session
+$ hcp auth logout [Optional Flags]
+```
+
+## Examples
+
+Log out of the HCP CLI:
+
+```shell-session
+$ hcp auth logout
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/auth/print-access-token.mdx b/content/hcp-docs/content/docs/cli/commands/auth/print-access-token.mdx
new file mode 100644
index 0000000000..288a0d5aa7
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/auth/print-access-token.mdx
@@ -0,0 +1,37 @@
+---
+page_title: hcp auth print-access-token
+description: |-
+  The "hcp auth print-access-token" command lets you print the access token for the authenticated account.
+---
+
+# hcp auth print-access-token
+
+Command: `hcp auth print-access-token`
+
+The `hcp auth print-access-token` command prints an access token for the
+currently authenticated account.
+
+The output of this command can be used to set the `Authorization: Bearer
+<token>` HTTP header when manually making API requests.
+ +## Usage + +```shell-session +$ hcp auth print-access-token [Optional Flags] +``` + +## Examples + +To print the access token: + +```shell-session +$ hcp auth print-access-token +``` + +To use the access token when curling an API: + +```shell-session +$ curl https://api.cloud.hashicorp.com/iam/2019-12-10/caller-identity \ + --header "Authorization: Bearer $(hcp auth print-access-token)" +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/create.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/create.mdx new file mode 100644 index 0000000000..bcbb479544 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/create.mdx @@ -0,0 +1,49 @@ +--- +page_title: hcp iam groups create +description: |- + The "hcp iam groups create" command lets you create a new group. +--- + +# hcp iam groups create + +Command: `hcp iam groups create` + +The `hcp iam groups create` command creates a new group. + +Once a group is created, membership can be managed using the `hcp iam groups +members` command group. + +## Usage + +```shell-session +$ hcp iam groups create GROUP_NAME [Optional Flags] +``` + +## Examples + +Create a new group for the platform engineering team: + +```shell-session +$ hcp iam groups create team-platform \ + --description "Team Platform engineering group" +``` + +Create a new group and specify the initial members: + +```shell-session +$ hcp iam groups create team-platform \ + --description "Team Platform engineering group" \ + --member=7f8a81b2-1320-4e49-a2e5-44f628ec74c3 \ + --member=f74f44b9-414a-409e-a257-72805d2c067b +``` + +## Positional arguments + +- `GROUP_NAME` - The name of the group to create. + +## Flags + +- `--description=DESCRIPTION` - An optional description for the group. + +- `--member=ID [Repeatable]` - The ID of the principal to add to the group. 
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/delete.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/delete.mdx new file mode 100644 index 0000000000..8ae17b30d8 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/delete.mdx @@ -0,0 +1,43 @@ +--- +page_title: hcp iam groups delete +description: |- + The "hcp iam groups delete" command lets you delete a group. +--- + +# hcp iam groups delete + +Command: `hcp iam groups delete` + +The `hcp iam groups delete` command deletes a group. + +Once the group is deleted, all permissions granted to members based on group +membership will also be revoked. + +## Usage + +```shell-session +$ hcp iam groups delete GROUP_NAME [Optional Flags] +``` + +## Examples + +Delete a group using its name suffix: + +```shell-session +$ hcp iam groups delete team-platform +``` + +Delete a group using its resource name: + +```shell-session +$ hcp iam groups delete iam/organization/example-org/group/team-platform +``` + +## Positional arguments + +- `GROUP_NAME` - The name of the group to delete. The name may be specified as either: + + * The group's resource name. Formatted as + `iam/organization/ORG_ID/group/GROUP_NAME` + * The resource name suffix, `GROUP_NAME`. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/add-binding.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/add-binding.mdx new file mode 100644 index 0000000000..e9c43e4b39 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/add-binding.mdx @@ -0,0 +1,48 @@ +--- +page_title: hcp iam groups iam add-binding +description: |- + The "hcp iam groups iam add-binding" command lets you add an IAM policy binding for a group. +--- + +# hcp iam groups iam add-binding + +Command: `hcp iam groups iam add-binding` + +The `hcp iam groups iam add-binding` command adds an IAM policy binding for the +given group. 
A binding grants the specified principal the given role on the +group. + +To view the available roles to bind, run `hcp iam roles list`. + +Currently, the only supported role on a principal in a group is +`roles/iam.group-manager`. + +A group manager can add/remove members from the group and update the group +name/description. + +## Usage + +```shell-session +$ hcp iam groups iam add-binding --group=NAME --member=PRINCIPAL_ID --role=ROLE_ID + [Optional Flags] +``` + +## Examples + +Bind a principal to role `roles/iam.group-manager`: + +```shell-session +$ hcp iam groups iam add-binding \ + --group=Group-Name \ + --member=ef938a22-09cf-4be9-b4d0-1f4587f80f53 \ + --role=roles/iam.group-manager +``` + +## Required flags + +- `-g, --group=NAME` - The name of the group to add the role binding to. + +- `-m, --member=PRINCIPAL_ID` - The ID of the principal to add the role binding to. + +- `-r, --role=ROLE_ID` - The role ID (e.g. "roles/iam.group-manager") to bind the member to. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/delete-binding.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/delete-binding.mdx new file mode 100644 index 0000000000..18ed790f52 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/delete-binding.mdx @@ -0,0 +1,42 @@ +--- +page_title: hcp iam groups iam delete-binding +description: |- + The "hcp iam groups iam delete-binding" command lets you delete an IAM policy binding for a group. +--- + +# hcp iam groups iam delete-binding + +Command: `hcp iam groups iam delete-binding` + +The `hcp iam groups iam delete-binding` command deletes an IAM policy binding +for the given group. A binding consists of a principal and a role. + +To view the existing role bindings, run `hcp iam groups iam read-policy`. 
+ +## Usage + +```shell-session +$ hcp iam groups iam delete-binding --group=NAME --member=PRINCIPAL_ID + --role=ROLE_ID [Optional Flags] +``` + +## Examples + +Delete a role binding for a principal's previously granted role +`roles/iam.group-manager`: + +```shell-session +$ hcp iam groups iam delete-binding \ + --group=Group-Name \ + --member=ef938a22-09cf-4be9-b4d0-1f4587f80f53 \ + --role=roles/iam.group-manager +``` + +## Required flags + +- `-g, --group=NAME` - The name of the group to remove the role binding from. + +- `-m, --member=PRINCIPAL_ID` - The ID of the principal to remove the role binding from. + +- `-r, --role=ROLE_ID` - The role ID (e.g. "roles/admin", "roles/contributor", "roles/viewer") to remove the member from. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/index.mdx new file mode 100644 index 0000000000..5b2883a472 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/index.mdx @@ -0,0 +1,37 @@ +--- +page_title: hcp iam groups iam +description: |- + The "hcp iam groups iam" command lets you manage a group's IAM policy. +--- + +# hcp iam groups iam + +Command: `hcp iam groups iam` + +The `hcp iam groups iam` command group lets you manage a group's IAM Policy. + +## Usage + +```shell-session +$ hcp iam groups iam [Optional Flags] +``` + +## Examples + +To set a member as a group manager, you can use the `add-binding` subcommand +with the `roles/iam.group-manager` role: + +```shell-session +$ hcp iam groups iam add-binding \ + --group=Group-Name \ + --member=ef938a22-09cf-4be9-b4d0-1f4587f80f53 \ + --role=roles/iam.group-manager +``` + +## Commands + +- [`add-binding`](/hcp/docs/cli/commands/iam/groups/iam/add-binding) - Add an IAM policy binding for a group. +- [`delete-binding`](/hcp/docs/cli/commands/iam/groups/iam/delete-binding) - Delete an IAM policy binding for a group. 
+- [`read-policy`](/hcp/docs/cli/commands/iam/groups/iam/read-policy) - Read the IAM policy for a group. +- [`set-policy`](/hcp/docs/cli/commands/iam/groups/iam/set-policy) - Set the IAM policy for a group. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/read-policy.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/read-policy.mdx new file mode 100644 index 0000000000..6de314b6be --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/read-policy.mdx @@ -0,0 +1,36 @@ +--- +page_title: hcp iam groups iam read-policy +description: |- + The "hcp iam groups iam read-policy" command lets you read the IAM policy for a group. +--- + +# hcp iam groups iam read-policy + +Command: `hcp iam groups iam read-policy` + +The `hcp iam groups iam read-policy` command reads the IAM policy for a group. + +## Usage + +```shell-session +$ hcp iam groups iam read-policy [Optional Flags] +``` + +## Examples + +Read the IAM Policy for a group: + +```shell-session +$ hcp iam groups iam read-policy \ + --group=iam/organization/cf8ef907-b9b9-4f2f-b675-e290448f0000/group/Group-Name +``` + +## Flags + +- `-g, --group=NAME` - The name of the group to read the IAM policy for. The name may be specified as + either: + + * The group's resource name. Formatted as + `iam/organization/ORG_ID/group/GROUP_NAME` + * The resource name suffix, `GROUP_NAME`. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/set-policy.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/set-policy.mdx new file mode 100644 index 0000000000..ca37a78b97 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/iam/set-policy.mdx @@ -0,0 +1,82 @@ +--- +page_title: hcp iam groups iam set-policy +description: |- + The "hcp iam groups iam set-policy" command lets you set the IAM policy for a group. 
+---
+
+# hcp iam groups iam set-policy
+
+Command: `hcp iam groups iam set-policy`
+
+The `hcp iam groups iam set-policy` command sets the IAM policy for a group,
+given the group name and a JSON-encoded file that contains the IAM policy. If
+adding or removing a single principal from the policy, prefer using `hcp iam
+groups iam add-binding` and the related `hcp iam groups iam delete-binding`.
+
+The policy file is expected to be a JSON object with the following format:
+
+```json
+{
+  "bindings": [
+    {
+      "role_id": "ROLE_ID",
+      "members": [
+        {
+          "member_id": "PRINCIPAL_ID",
+          "member_type": "USER"
+        }
+      ]
+    }
+  ],
+  "etag": "ETAG"
+}
+```
+
+If set, the etag of the policy must be equal to that of the existing policy. To
+view the existing policy and its etag, run `hcp iam groups iam read-policy
+--format=json`. If unset, the existing policy's etag will be fetched and used.
+
+Note that the only supported `member_type` is `USER` and the only supported
+`role_id` is `roles/iam.group-manager`.
+
+## Usage
+
+```shell-session
+$ hcp iam groups iam set-policy --group=NAME --policy-file=PATH [Optional Flags]
+```
+
+## Examples
+
+Set the IAM Policy for a group:
+
+```shell-session
+$ cat >policy.json <<EOF
+{
+  "bindings": [
+    {
+      "role_id": "roles/iam.group-manager",
+      "members": [
+        {
+          "member_id": "ef938a22-09cf-4be9-b4d0-1f4587f80f53",
+          "member_type": "USER"
+        }
+      ]
+    }
+  ]
+}
+EOF
+
+$ hcp iam groups iam set-policy --group=Group-Name --policy-file=policy.json
+```
+
+## Command groups
+
+- [`members`](/hcp/docs/cli/commands/iam/groups/members) - Manage group membership.
+- [`iam`](/hcp/docs/cli/commands/iam/groups/iam) - Manage a group's IAM policy.
+
+## Commands
+
+- [`list`](/hcp/docs/cli/commands/iam/groups/list) - List the organization's groups.
+- [`read`](/hcp/docs/cli/commands/iam/groups/read) - Show metadata for the given group.
+- [`create`](/hcp/docs/cli/commands/iam/groups/create) - Create a new group.
+- [`delete`](/hcp/docs/cli/commands/iam/groups/delete) - Delete a group.
+- [`update`](/hcp/docs/cli/commands/iam/groups/update) - Update an existing group.
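+
+The commands above compose into a typical group-management workflow. The
+following sketch uses only flags documented on the pages above; the group
+name, description, and member ID are illustrative:
+
+```shell-session
+$ hcp iam groups create team-platform \
+    --description "Team Platform engineering group"
+
+$ hcp iam groups members add --group=team-platform \
+    --member=7f8a81b2-1320-4e49-a2e5-44f628ec74c3
+
+$ hcp iam groups read team-platform
+```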
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/list.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/list.mdx new file mode 100644 index 0000000000..6e4e82102e --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/list.mdx @@ -0,0 +1,18 @@ +--- +page_title: hcp iam groups list +description: |- + The "hcp iam groups list" command lets you list the organization's groups. +--- + +# hcp iam groups list + +Command: `hcp iam groups list` + +The `hcp iam groups list` command lists the groups for an HCP organization. + +## Usage + +```shell-session +$ hcp iam groups list [Optional Flags] +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/members/add.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/add.mdx new file mode 100644 index 0000000000..3e73c8d37d --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/add.mdx @@ -0,0 +1,40 @@ +--- +page_title: hcp iam groups members add +description: |- + The "hcp iam groups members add" command lets you add members to a group. +--- + +# hcp iam groups members add + +Command: `hcp iam groups members add` + +The `hcp iam groups members add` command adds members to a group. + +All added members will inherit any roles that have been granted to the group. + +## Usage + +```shell-session +$ hcp iam groups members add [Optional Flags] +``` + +## Examples + +Add members to the "platform-team": + +```shell-session +$ hcp iam groups members add --group=team-platform \ + --member=7f8a81b2-1320-4e49-a2e5-44f628ec74c3 \ + --member=f74f44b9-414a-409e-a257-72805d2c067b +``` + +## Flags + +- `-g, --group=NAME` - The name of the group to add a member to. The name may be specified as either: + + * The group's resource name. Formatted as + `iam/organization/ORG_ID/group/GROUP_NAME` + * The resource name suffix, `GROUP_NAME`. + +- `-m, --member=ID [Repeatable]` - The ID of the user principal to add to the group. 
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/members/delete.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/delete.mdx new file mode 100644 index 0000000000..71644e1f8a --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/delete.mdx @@ -0,0 +1,42 @@ +--- +page_title: hcp iam groups members delete +description: |- + The "hcp iam groups members delete" command lets you delete a membership from a group. +--- + +# hcp iam groups members delete + +Command: `hcp iam groups members delete` + +The `hcp iam groups members delete` command deletes a membership from a group. + +All members that are deleted will no longer inherit any roles that have been +granted to the group. + +## Usage + +```shell-session +$ hcp iam groups members delete [Optional Flags] +``` + +## Examples + +Delete members from the "platform-team": + +```shell-session +$ hcp iam groups members delete --group=team-platform \ + --member=7f8a81b2-1320-4e49-a2e5-44f628ec74c3 \ + --member=f74f44b9-414a-409e-a257-72805d2c067b +``` + +## Flags + +- `-g, --group=NAME` - The name of the group to delete a membership from. The name may be specified as + either: + + * The group's resource name. Formatted as + `iam/organization/ORG_ID/group/GROUP_NAME` + * The resource name suffix, `GROUP_NAME`. + +- `-m, --member=ID [Repeatable]` - The ID of the user principal to remove membership from the group. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/members/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/index.mdx new file mode 100644 index 0000000000..05b193ab35 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/index.mdx @@ -0,0 +1,24 @@ +--- +page_title: hcp iam groups members +description: |- + The "hcp iam groups members" command lets you manage group membership. 
+--- + +# hcp iam groups members + +Command: `hcp iam groups members` + +The `hcp iam groups members` command group lets you manage group membership. + +## Usage + +```shell-session +$ hcp iam groups members [Optional Flags] +``` + +## Commands + +- [`list`](/hcp/docs/cli/commands/iam/groups/members/list) - List the members of a group. +- [`add`](/hcp/docs/cli/commands/iam/groups/members/add) - Add members to a group. +- [`delete`](/hcp/docs/cli/commands/iam/groups/members/delete) - Delete a membership from a group. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/members/list.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/list.mdx new file mode 100644 index 0000000000..9be9358181 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/members/list.mdx @@ -0,0 +1,35 @@ +--- +page_title: hcp iam groups members list +description: |- + The "hcp iam groups members list" command lets you list the members of a group. +--- + +# hcp iam groups members list + +Command: `hcp iam groups members list` + +The `hcp iam groups members list` command lists the members of a group. + +## Usage + +```shell-session +$ hcp iam groups members list [Optional Flags] +``` + +## Examples + +List the members of "team-platform": + +```shell-session +$ hcp iam groups members list --group=team-platform +``` + +## Flags + +- `-g, --group=NAME` - The name of the group to list membership from. The name may be specified as + either: + + * The group's resource name. Formatted as + `iam/organization/ORG_ID/group/GROUP_NAME` + * The resource name suffix, `GROUP_NAME`.
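Either name form resolves to the same group. As an illustration, a hypothetical helper (not part of the CLI) can expand a bare `GROUP_NAME` suffix into the full documented resource name, given an organization ID:

```python
def group_resource_name(name: str, org_id: str) -> str:
    """Expand a bare GROUP_NAME suffix to the full documented resource name.

    Names that already contain '/' are treated as full resource names
    and passed through unchanged.
    """
    if "/" in name:
        return name
    return f"iam/organization/{org_id}/group/{name}"

# Both calls yield the same resource name.
print(group_resource_name("team-platform", "example-org"))
print(group_resource_name("iam/organization/example-org/group/team-platform", "example-org"))
```

This mirrors the two accepted forms listed under the `--group` flag; the CLI performs the equivalent resolution internally.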
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/read.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/read.mdx new file mode 100644 index 0000000000..b03fb202ab --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/read.mdx @@ -0,0 +1,40 @@ +--- +page_title: hcp iam groups read +description: |- + The "hcp iam groups read" command lets you show metadata for the given group. +--- + +# hcp iam groups read + +Command: `hcp iam groups read` + +The `hcp iam groups read` command reads details about the given group. + +## Usage + +```shell-session +$ hcp iam groups read GROUP_NAME [Optional Flags] +``` + +## Examples + +Read the group using the resource name suffix "example-group": + +```shell-session +$ hcp iam groups read example-group +``` + +Read the group using the group's resource name: + +```shell-session +$ hcp iam groups read iam/organization/example-org/group/example-group +``` + +## Positional arguments + +- `GROUP_NAME` - The name of the group to read. The name may be specified as either: + + * The group's resource name. Formatted as + `iam/organization/ORG_ID/group/GROUP_NAME` + * The resource name suffix, `GROUP_NAME`. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/groups/update.mdx b/content/hcp-docs/content/docs/cli/commands/iam/groups/update.mdx new file mode 100644 index 0000000000..02a52bbe72 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/groups/update.mdx @@ -0,0 +1,45 @@ +--- +page_title: hcp iam groups update +description: |- + The "hcp iam groups update" command lets you update an existing group. +--- + +# hcp iam groups update + +Command: `hcp iam groups update` + +The `hcp iam groups update` command updates a group. + +Update can be used to update the display name or description of an existing +group. 
+ +## Usage + +```shell-session +$ hcp iam groups update GROUP_NAME [Optional Flags] +``` + +## Examples + +Update a group's description and display name: + +```shell-session +$ hcp iam groups update example-group \ + --description="updated description" \ + --display-name="new display name" +``` + +## Positional arguments + +- `GROUP_NAME` - The name of the group to update. The name may be specified as either: + + * The group's resource name. Formatted as + `iam/organization/ORG_ID/group/GROUP_NAME` + * The resource name suffix, `GROUP_NAME`. + +## Flags + +- `--description=NEW_DESCRIPTION` - New description for the group. + +- `--display-name=NEW_DISPLAY_NAME` - New display name for the group. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/index.mdx new file mode 100644 index 0000000000..5f0e3390d9 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/index.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp iam +description: |- + The "hcp iam" command lets you manage identity and access management. +--- + +# hcp iam + +Command: `hcp iam` + +The `hcp iam` command group lets you manage HCP identities including users, +groups, and service principals. + +Service principal keys or workload identity providers may also be managed. When +accessing HCP services from workloads that have an external identity provider, +prefer using workload identity federation for more secure access to HCP. + +## Usage + +```shell-session +$ hcp iam [Optional Flags] +``` + +## Command groups + +- [`roles`](/hcp/docs/cli/commands/iam/roles) - Interact with an organization's roles. +- [`users`](/hcp/docs/cli/commands/iam/users) - Manage an organization's users. +- [`groups`](/hcp/docs/cli/commands/iam/groups) - Manage HCP Groups. +- [`service-principals`](/hcp/docs/cli/commands/iam/service-principals) - Create and manage service principals.
+- [`workload-identity-providers`](/hcp/docs/cli/commands/iam/workload-identity-providers) - Manage Workload Identity Providers. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/roles/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/roles/index.mdx new file mode 100644 index 0000000000..5f4c5accbe --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/roles/index.mdx @@ -0,0 +1,23 @@ +--- +page_title: hcp iam roles +description: |- + The "hcp iam roles" command lets you interact with an organization's roles. +--- + +# hcp iam roles + +Command: `hcp iam roles` + +The `hcp iam roles` command group lets you interact with an HCP organization's +roles. + +## Usage + +```shell-session +$ hcp iam roles [Optional Flags] +``` + +## Commands + +- [`list`](/hcp/docs/cli/commands/iam/roles/list) - List an organization's roles. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/roles/list.mdx b/content/hcp-docs/content/docs/cli/commands/iam/roles/list.mdx new file mode 100644 index 0000000000..7cf48fe467 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/roles/list.mdx @@ -0,0 +1,22 @@ +--- +page_title: hcp iam roles list +description: |- + The "hcp iam roles list" command lets you list an organization's roles. +--- + +# hcp iam roles list + +Command: `hcp iam roles list` + +The `hcp iam roles list` command lists the roles that exist for an HCP +organization. + +When referring to a role in an IAM binding, use the role's ID (e.g. +"roles/admin"). 
+ +## Usage + +```shell-session +$ hcp iam roles list [Optional Flags] +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/create.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/create.mdx new file mode 100644 index 0000000000..3a78ec1ff4 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/create.mdx @@ -0,0 +1,49 @@ +--- +page_title: hcp iam service-principals create +description: |- + The "hcp iam service-principals create" command lets you create a new service principal. +--- + +# hcp iam service-principals create + +Command: `hcp iam service-principals create` + +The `hcp iam service-principals create` command creates a new service +principal. + +Once a service principal is created, you can grant access to it by generating +keys with the `hcp iam service-principals keys create` command or by federating +access from an external workload identity provider using `hcp iam +service-principals workload-identity-provider create`. + +Service principals can be created at the organization scope or the project +scope. Creating them at the project scope is recommended: it limits access and +locates the service principal near the resources it will access. + +To create an organization service principal, set the `--project` flag to `-`. + +## Usage + +```shell-session +$ hcp iam service-principals create SP_NAME [Optional Flags] +``` + +## Examples + +Create a new service principal: + +```shell-session +$ hcp iam service-principals create example-sp +``` + +Create a new organization service principal: + +```shell-session +$ hcp iam service-principals create example-sp --project="-" +``` + +## Positional arguments + +- `SP_NAME` - The name of the service principal to create.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/delete.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/delete.mdx new file mode 100644 index 0000000000..5ddecaf3ad --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/delete.mdx @@ -0,0 +1,50 @@ +--- +page_title: hcp iam service-principals delete +description: |- + The "hcp iam service-principals delete" command lets you delete a service principal. +--- + +# hcp iam service-principals delete + +Command: `hcp iam service-principals delete` + +The `hcp iam service-principals delete` command deletes a service principal. + +Once the service principal is deleted, any IAM policies that bound it are +updated. + +To delete an organization service principal, pass the service principal's +resource name or set the `--project` flag to `-` and pass its resource name +suffix. + +## Usage + +```shell-session +$ hcp iam service-principals delete SP_NAME [Optional Flags] +``` + +## Examples + +Delete a service principal using its name suffix: + +```shell-session +$ hcp iam service-principals delete example-sp +``` + +Delete a service principal using its resource name: + +```shell-session +$ hcp iam service-principals delete \ + iam/project/example-project/service-principal/example-sp +``` + +## Positional arguments + +- `SP_NAME` - The name of the service principal to delete. The name may be specified as + either: + + * The service principal's resource name. Formatted as one of the following: + * `iam/project/PROJECT_ID/service-principal/SP_NAME` + * `iam/organization/ORG_ID/service-principal/SP_NAME` + * The resource name suffix, `SP_NAME`.
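Service principal resource names follow one of the two documented forms, or a bare suffix. A small sketch (a hypothetical helper, not part of the CLI) that classifies a name, for example to decide whether `--project=-` would be needed:

```python
def sp_scope(name: str) -> str:
    """Classify a service principal name: 'organization', 'project', or bare 'suffix'."""
    parts = name.split("/")
    if len(parts) == 1:
        return "suffix"
    if parts[0] == "iam" and parts[1] in ("organization", "project"):
        return parts[1]
    raise ValueError(f"unrecognized service principal name: {name}")

print(sp_scope("iam/project/example-project/service-principal/example-sp"))
print(sp_scope("iam/organization/example-org/service-principal/example-sp"))
print(sp_scope("example-sp"))
```

A bare suffix is resolved against the profile's project by default, which is why organization-scoped operations on a suffix require `--project=-`.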
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/index.mdx new file mode 100644 index 0000000000..1298e2e2b2 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/index.mdx @@ -0,0 +1,50 @@ +--- +page_title: hcp iam service-principals +description: |- + The "hcp iam service-principals" command lets you create and manage service principals. +--- + +# hcp iam service-principals + +Command: `hcp iam service-principals` + +The `hcp iam service-principals` command group lets you create and manage +service principals. + +A service principal is a principal that is typically used by an application or +workload that interacts with HCP. Your application uses the service principal to +authenticate to HCP so that users aren't directly involved. + +Because service principals are principals, you can grant them permissions by +granting roles. Refer to the examples for guidance. + +## Usage + +```shell-session +$ hcp iam service-principals [Optional Flags] +``` + +## Aliases + +- `sp`. For example: `hcp iam sp list` + +## Examples + +Create a new service principal and grant it "admin" on the project set in the profile: + +```shell-session +$ hcp iam service-principals create my-app --format=json +$ hcp projects add-iam-binding --member=my-app-sp-id --role=roles/admin +``` + +## Command groups + +- [`keys`](/hcp/docs/cli/commands/iam/service-principals/keys) - Create and manage service principal keys. + +## Commands + +- [`create`](/hcp/docs/cli/commands/iam/service-principals/create) - Create a new service principal. +- [`delete`](/hcp/docs/cli/commands/iam/service-principals/delete) - Delete a service principal. +- [`list`](/hcp/docs/cli/commands/iam/service-principals/list) - List service principals. +- [`read`](/hcp/docs/cli/commands/iam/service-principals/read) - Show metadata for the given service principal.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/create.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/create.mdx new file mode 100644 index 0000000000..a4ee2c3b9d --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/create.mdx @@ -0,0 +1,67 @@ +--- +page_title: hcp iam service-principals keys create +description: |- + The "hcp iam service-principals keys create" command lets you create a new service principal key. +--- + +# hcp iam service-principals keys create + +Command: `hcp iam service-principals keys create` + +The `hcp iam service-principals keys create` command creates a new service +principal key. + +To output the generated keys to a credential file, pass the --output-cred-file +flag. The credential file can be used to authenticate as the service principal. +The benefit of using the credential file is that it avoids printing the Client +ID and Client Secret to the terminal, and allows the credentials to be stored in +a way that is less likely to leak into shell history. The HCP CLI allows +authenticating via credential files using `hcp auth login --cred-file=PATH`. +Prefer using credential files if your workflow allows it. + +To create a key for an organization service principal, pass the service +principal's resource name or set the `--project` flag to `-` and pass its +resource name suffix. 
+ +## Usage + +```shell-session +$ hcp iam service-principals keys create SP_NAME [Optional Flags] +``` + +## Examples + +Create a new service principal key: + +```shell-session +$ hcp iam service-principals keys create my-service-principal +``` + +Create a new service principal key specifying the resource name of the service principal: + +```shell-session +$ hcp iam service-principals keys create \ + iam/project/123/service-principal/my-service-principal +``` + +Output the new service principal key to a credential file: + +```shell-session +$ hcp iam service-principals keys create my-service-principal \ + --output-cred-file=my-service-principal-creds.json +``` + +## Positional arguments + +- `SP_NAME` - The name of the service principal to create a key for. The name may be + specified as either: + + * The service principal's resource name. Formatted as one of the following: + * `iam/project/PROJECT_ID/service-principal/SP_NAME` + * `iam/organization/ORG_ID/service-principal/SP_NAME` + * The resource name suffix, `SP_NAME`. + +## Flags + +- `--output-cred-file=PATH` - Output the created service principal key to a credential file. The file type must be json. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/delete.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/delete.mdx new file mode 100644 index 0000000000..bb3dde6e01 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/delete.mdx @@ -0,0 +1,32 @@ +--- +page_title: hcp iam service-principals keys delete +description: |- + The "hcp iam service-principals keys delete" command lets you delete a service principal key. +--- + +# hcp iam service-principals keys delete + +Command: `hcp iam service-principals keys delete` + +The `hcp iam service-principals keys delete` command deletes a service +principal key. 
+ +## Usage + +```shell-session +$ hcp iam service-principals keys delete KEY_NAME [Optional Flags] +``` + +## Examples + +Delete a service principal key: + +```shell-session +$ hcp iam service-principals keys delete \ + iam/project/example/service-principal/example-sp/key/3KgtSLWTSs +``` + +## Positional arguments + +- `KEY_NAME` - The resource name of the service principal key to delete. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/index.mdx new file mode 100644 index 0000000000..587a9cfc07 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/index.mdx @@ -0,0 +1,28 @@ +--- +page_title: hcp iam service-principals keys +description: |- + The "hcp iam service-principals keys" command lets you create and manage service principal keys. +--- + +# hcp iam service-principals keys + +Command: `hcp iam service-principals keys` + +The `hcp iam service-principals keys` command group lets you create and manage +service principal keys. + +A service principal key is the credential used by a service principal to +authenticate with HCP and should be treated as a secret. + +## Usage + +```shell-session +$ hcp iam service-principals keys [Optional Flags] +``` + +## Commands + +- [`create`](/hcp/docs/cli/commands/iam/service-principals/keys/create) - Create a new service principal key. +- [`list`](/hcp/docs/cli/commands/iam/service-principals/keys/list) - List a service principal's keys. +- [`delete`](/hcp/docs/cli/commands/iam/service-principals/keys/delete) - Delete a service principal key.
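As the delete example above shows, a key's resource name extends the owning service principal's resource name with a `/key/KEY_ID` suffix. An illustrative helper (hypothetical, not part of the CLI) for splitting one apart:

```python
def split_key_name(key_name: str) -> tuple:
    """Split a key resource name into (service_principal_name, key_id).

    Shape inferred from the delete example:
    iam/.../service-principal/SP_NAME/key/KEY_ID
    """
    sp_name, sep, key_id = key_name.rpartition("/key/")
    if not sep or not key_id:
        raise ValueError(f"not a key resource name: {key_name}")
    return sp_name, key_id

sp, key_id = split_key_name(
    "iam/project/example/service-principal/example-sp/key/3KgtSLWTSs"
)
print(sp)      # the owning service principal's resource name
print(key_id)  # the key's ID suffix
```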
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/list.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/list.mdx new file mode 100644 index 0000000000..9da7dd7aa9 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/keys/list.mdx @@ -0,0 +1,48 @@ +--- +page_title: hcp iam service-principals keys list +description: |- + The "hcp iam service-principals keys list" command lets you list a service principal's keys. +--- + +# hcp iam service-principals keys list + +Command: `hcp iam service-principals keys list` + +The `hcp iam service-principals keys list` command lists a service principal's +keys. + +To list keys for an organization service principal, pass the service +principal's resource name or set the `--project` flag to `-` and pass its +resource name suffix. + +## Usage + +```shell-session +$ hcp iam service-principals keys list SP_NAME [Optional Flags] +``` + +## Examples + +List a service principal's keys: + +```shell-session +$ hcp iam service-principals keys list my-service-principal +``` + +List a service principal's keys specifying the resource name of the service principal: + +```shell-session +$ hcp iam service-principals keys list \ + iam/project/123/service-principal/my-service-principal +``` + +## Positional arguments + +- `SP_NAME` - The name of the service principal to list keys for. The name may be specified + as either: + + * The service principal's resource name. Formatted as one of the following: + * `iam/project/PROJECT_ID/service-principal/SP_NAME` + * `iam/organization/ORG_ID/service-principal/SP_NAME` + * The resource name suffix, `SP_NAME`. 
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/list.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/list.mdx new file mode 100644 index 0000000000..7d2ebf02d6 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/list.mdx @@ -0,0 +1,20 @@ +--- +page_title: hcp iam service-principals list +description: |- + The "hcp iam service-principals list" command lets you list service principals. +--- + +# hcp iam service-principals list + +Command: `hcp iam service-principals list` + +The `hcp iam service-principals list` command lists the service principals. + +To list organization service principals, set the `--project` flag to `-`. + +## Usage + +```shell-session +$ hcp iam service-principals list [Optional Flags] +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/service-principals/read.mdx b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/read.mdx new file mode 100644 index 0000000000..2e142e679c --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/service-principals/read.mdx @@ -0,0 +1,47 @@ +--- +page_title: hcp iam service-principals read +description: |- + The "hcp iam service-principals read" command lets you show metadata for the given service principal. +--- + +# hcp iam service-principals read + +Command: `hcp iam service-principals read` + +The `hcp iam service-principals read` command reads details about the given +service principal. + +To read an organization service principal, pass the service principal's +resource name or set the `--project` flag to `-` and pass its resource name +suffix. 
+ +## Usage + +```shell-session +$ hcp iam service-principals read SP_NAME [Optional Flags] +``` + +## Examples + +Read the service principal using the resource name suffix "example-sp": + +```shell-session +$ hcp iam service-principals read example-sp +``` + +Read the service principal using the service principal's resource name: + +```shell-session +$ hcp iam service-principals read \ + iam/project/example-project/service-principal/example-sp +``` + +## Positional arguments + +- `SP_NAME` - The name of the service principal to read. The name may be specified as either: + + * The service principal's resource name. Formatted as one of the following: + * `iam/project/PROJECT_ID/service-principal/SP_NAME` + * `iam/organization/ORG_ID/service-principal/SP_NAME` + * The resource name suffix, `SP_NAME`. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/users/delete.mdx b/content/hcp-docs/content/docs/cli/commands/iam/users/delete.mdx new file mode 100644 index 0000000000..03aef988a2 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/users/delete.mdx @@ -0,0 +1,30 @@ +--- +page_title: hcp iam users delete +description: |- + The "hcp iam users delete" command lets you delete a user from the organization. +--- + +# hcp iam users delete + +Command: `hcp iam users delete` + +The `hcp iam users delete` command deletes a user from the organization. + +## Usage + +```shell-session +$ hcp iam users delete ID [Optional Flags] +``` + +## Examples + +Delete a user: + +```shell-session +$ hcp iam users delete example-id-123 +``` + +## Positional arguments + +- `ID` - The ID of the user to delete. 
+ diff --git a/content/hcp-docs/content/docs/cli/commands/iam/users/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/users/index.mdx new file mode 100644 index 0000000000..086797a90d --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/users/index.mdx @@ -0,0 +1,25 @@ +--- +page_title: hcp iam users +description: |- + The "hcp iam users" command lets you manage an organization's users. +--- + +# hcp iam users + +Command: `hcp iam users` + +The `hcp iam users` command group lets you manage the users of an HCP +organization. + +## Usage + +```shell-session +$ hcp iam users [Optional Flags] +``` + +## Commands + +- [`list`](/hcp/docs/cli/commands/iam/users/list) - List the organization's users. +- [`read`](/hcp/docs/cli/commands/iam/users/read) - Show metadata for the given user. +- [`delete`](/hcp/docs/cli/commands/iam/users/delete) - Delete a user from the organization. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/users/list.mdx b/content/hcp-docs/content/docs/cli/commands/iam/users/list.mdx new file mode 100644 index 0000000000..33ecb1d0b4 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/users/list.mdx @@ -0,0 +1,18 @@ +--- +page_title: hcp iam users list +description: |- + The "hcp iam users list" command lets you list the organization's users. +--- + +# hcp iam users list + +Command: `hcp iam users list` + +The `hcp iam users list` command lists the users for an HCP organization. + +## Usage + +```shell-session +$ hcp iam users list [Optional Flags] +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/users/read.mdx b/content/hcp-docs/content/docs/cli/commands/iam/users/read.mdx new file mode 100644 index 0000000000..55a5d3d832 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/users/read.mdx @@ -0,0 +1,30 @@ +--- +page_title: hcp iam users read +description: |- + The "hcp iam users read" command lets you show metadata for the given user. 
+--- + +# hcp iam users read + +Command: `hcp iam users read` + +The `hcp iam users read` command reads details about the given user. + +## Usage + +```shell-session +$ hcp iam users read ID [Optional Flags] +``` + +## Examples + +Read the user principal with ID "example-id-123": + +```shell-session +$ hcp iam users read example-id-123 +``` + +## Positional arguments + +- `ID` - The ID of the user to read. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-aws.mdx b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-aws.mdx new file mode 100644 index 0000000000..abbef73925 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-aws.mdx @@ -0,0 +1,82 @@ +--- +page_title: hcp iam workload-identity-providers create-aws +description: |- + The "hcp iam workload-identity-providers create-aws" command lets you create an AWS Workload Identity Provider. +--- + +# hcp iam workload-identity-providers create-aws + +Command: `hcp iam workload-identity-providers create-aws` + +The `hcp iam workload-identity-providers create-aws` command creates a new AWS +workload identity provider. + +Once created, workloads running in the specified AWS account can exchange their +AWS identity for an HCP access token that maps to the identity of the specified +service principal. + +The conditional access statement can restrict which AWS roles are allowed to +exchange their identity for an HCP access token. The conditional access statement +is a hashicorp/go-bexpr string that is evaluated when exchanging tokens. It has +access to the following variables: + + * `aws.arn`: The AWS ARN associated with the calling entity. + * `aws.account_id`: The AWS account ID number of the account that owns + or contains the calling entity. + * `aws.user_id`: The unique identifier of the calling entity.
+ +An example conditional access statement that restricts access to a specific role +is `'aws.arn matches "arn:aws:iam::123456789012:role/example-role/*"'`. + +To aid in creating the conditional access statement, run `aws sts +get-caller-identity` on the AWS workload to determine the values that will be +available to the conditional access statement. + +## Usage + +```shell-session +$ hcp iam workload-identity-providers create-aws PROVIDER_NAME + --account-id=AWS_ACCOUNT_ID --conditional-access=STATEMENT + --service-principal=RESOURCE_NAME [Optional Flags] +``` + +## Examples + +Create a provider that allows exchanging identities for AWS workloads with role "example-role": + +```shell-session +$ hcp iam workload-identity-providers create-aws aws-my-role \ + --service-principal=iam/project/PROJECT/service-principal/example-sp \ + --account-id=123456789012 \ + --conditional-access='aws.arn matches "arn:aws:iam::123456789012:role/example-role/*"' \ + --description="Allow exchanging AWS workloads that have role example-role" +``` + +## Positional arguments + +- `PROVIDER_NAME` - The name of the provider to create. + +## Required flags + +- `--account-id=AWS_ACCOUNT_ID` - The ID of the AWS account for which identity exchange will be allowed. + +- `--conditional-access=STATEMENT` - The conditional access statement is a hashicorp/go-bexpr string that is + evaluated when exchanging tokens. It restricts which upstream identities are + allowed to access the service principal. + + The conditional access statement can access the following variables: + + * `aws.arn`: The AWS ARN associated with the calling entity. + * `aws.account_id`: The AWS account ID number of the account that owns + or contains the calling entity. + * `aws.user_id`: The unique identifier of the calling entity.
+ + For details on the values of each variable, refer to the [AWS + documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html#API_GetCallerIdentity_ResponseElements). + +- `--service-principal=RESOURCE_NAME` - The resource name of the service principal to create the provider for. + +## Optional flags + +- `--description=TEXT` - A description of the provider. + diff --git a/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-cred-file.mdx b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-cred-file.mdx new file mode 100644 index 0000000000..72b578b075 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-cred-file.mdx @@ -0,0 +1,242 @@ +--- +page_title: hcp iam workload-identity-providers create-cred-file +description: |- + The "hcp iam workload-identity-providers create-cred-file" command lets you create a credential file. +--- + +# hcp iam workload-identity-providers create-cred-file + +Command: `hcp iam workload-identity-providers create-cred-file` + +The `hcp iam workload-identity-providers create-cred-file` command creates a +credential file that allows authenticating to HCP from a variety of +external accounts. + +The generated credential file contains details on how to obtain the credential +from the external identity provider and how to exchange it for an HCP access +token. + +After creating the credential file, the workload can authenticate the HCP CLI +by running `hcp auth login --cred-file=PATH`, where PATH is the path to +the generated credential file. + +## Usage + +```shell-session +$ hcp iam workload-identity-providers create-cred-file + WORKLOAD_IDENTITY_PROVIDER_NAME --output-file=PATH [Optional Flags] +``` + +## Examples + +Create a credential file for an AWS workload: + +```shell-session +# Set the --imdsv1 flag if the AWS instance metadata service is using version 1.
+$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/aws \ + --aws \ + --output-file=credentials.json +``` + +Create a credential file for a GCP workload: + +```shell-session +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/gcp \ + --gcp \ + --output-file=credentials.json +``` + +Create a credential file for an Azure workload using a User Managed Identity: + +```shell-session +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/azure \ + --azure \ + --azure-resource=MANAGED_IDENTITY_CLIENT_ID \ + --output-file=credentials.json +``` + +Create a credential file for an Azure workload that has multiple User Managed Identities: + +```shell-session +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/azure \ + --azure \ + --azure-resource=MANAGED_IDENTITY_CLIENT_ID \ + --azure-client-id=MANAGED_IDENTITY_CLIENT_ID \ + --output-file=credentials.json +``` + +Create a credential file for an Azure workload that uses a Managed Identity to authenticate as an Entra ID Application: + +```shell-session +# ENTRA_ID_APP_ID_URI generally has the form "api://123-456-678-901" +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/azure \ + --azure \ + --azure-resource=ENTRA_ID_APP_ID_URI \ + --azure-client-id=MANAGED_IDENTITY_CLIENT_ID \ + --output-file=credentials.json +``` + +Create a credential file that sources the token from a file: + +```shell-session +# Assuming the file has the following JSON payload: +# { +# "access_token": "eyJ0eXAiOiJKV1Qi...", +# ...
+# } +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/k8s \ + --source-file=/var/run/secrets/tokens/hcp_token \ + --source-json-pointer=/access_token \ + --output-file=credentials.json +``` + +Create a credential file that sources the token from a file: + +```shell-session +# Assuming the file only contains the access token: +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/k8s \ + --source-file=/var/run/secrets/tokens/hcp_token \ + --output-file=credentials.json +``` + +Create a credential file that sources the token from a URL: + +```shell-session +# Assuming the response has the following JSON payload: +# { +# "access_token": "eyJ0eXAiOiJKV1Qi...", +# ... +# } +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/example \ + --source-url="https://example-oidc-provider.com/token" \ + --source-json-pointer=/access_token \ + --output-file=credentials.json +``` + +Create a credential file that sources the token from a URL: + +```shell-session +# Assuming the file only contains the access token: +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/example \ + --source-url=https://example-oidc-provider.com/token \ + --output-file=credentials.json +``` + +Create a credential file that sources the token from a URL: + +```shell-session +# To add headers to the request, use the --source-header flag: +$ hcp iam workload-identity-providers create-cred-file \ + iam/project/123/service-principal/my-sp/workload-identity-provider/example \ + --source-url=https://example-oidc-provider.com/token \ + --source-header=Metadata=True \ + --source-header=Token=Identity \ + --output-file=credentials.json +``` + +Create a credential file that sources the token from an environment variable: + +```shell-session +# Assuming
the environment variable has the following JSON string value:
# {
# "access_token": "eyJ0eXAiOiJKV1Qi...",
# ...
# }
$ hcp iam workload-identity-providers create-cred-file \
  iam/project/123/service-principal/my-sp/workload-identity-provider/example \
  --source-env=ACCESS_TOKEN \
  --source-json-pointer=/access_token \
  --output-file=credentials.json
```

Create a credential file that sources the token from an environment variable that contains the bare token:

```shell-session
# Assuming the environment variable only contains the access token:
$ hcp iam workload-identity-providers create-cred-file \
  iam/project/123/service-principal/my-sp/workload-identity-provider/example \
  --source-env=ACCESS_TOKEN \
  --output-file=credentials.json
```

## Positional arguments

- `WORKLOAD_IDENTITY_PROVIDER_NAME` - The resource name of the provider that the credential file will use to
  exchange the external identity for an HCP access token.

## Required flags

- `--output-file=PATH` - The path to output the credential file.

## Optional flags

- `--aws` - Set if exchanging an AWS workload identity.

- `--azure` - Set if exchanging an Azure workload identity.

- `--azure-client-id=ID` - If the workload has multiple User Assigned Managed Identities, this flag
  specifies which Client ID should be used to retrieve the Azure identity token.

  If the workload only has one User Assigned Managed Identity, this flag is not
  required.

- `--azure-resource=URI` - The Azure Instance Metadata Service (IMDS) allows retrieving an access token
  for a specific resource. The audience (aud) claim in the returned token is set
  to the value of the resource parameter. As such, the azure-resource flag must be
  set to one of the allowed audiences for the Workload Identity Provider.

  The typical values for this flag are:

  * The Client ID of the User Assigned Managed Identity (UUID)
  * The Application ID URI of the Microsoft Entra ID Application
    (`api://123-456-678-901`).
  For more details on the resource parameter, see the [Azure
  documentation](https://learn.microsoft.com/en-us/entra/identity/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http).

- `--gcp` - Set if exchanging a GCP workload identity.

  It is assumed the workload identity provider was created with the issuer URI set
  to `https://accounts.google.com` and the default allowed audiences.

- `--imdsv1` - Set if the AWS instance metadata service is using version 1.

- `--source-env=VAR` - The environment variable name that contains the credential to exchange.

- `--source-file=PATH` - Path to a file that contains the credential to exchange.

- `--source-header=KEY=VALUE [Repeatable]` - Headers to send to the URL when obtaining the credential.

- `--source-json-pointer=/PATH/TO/CREDENTIAL` - A JSON pointer that indicates how to access the credential from a JSON
  payload. If used with the `source-url` flag, the pointer is used to extract the
  credential from the JSON response from calling the URL. If used with the
  `source-file` flag, the pointer is used to extract the credential read from the
  JSON file. Similarly, if used with the `source-env` flag, the pointer is used to
  extract the credential from the environment variable whose value is a JSON
  object.

  As an example, if the JSON payload containing the credential is:

  ```json hideClipboard
  {
    "access_token": "credentials",
    "nested": {
      "access_token": "nested-credentials"
    }
  }
  ```

  You can access the top-level access token using the pointer `/access_token`
  and the nested access token using the pointer `/nested/access_token`.

- `--source-url=URL` - URL to obtain the credential from.
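The pointer syntax above follows JSON Pointer (RFC 6901). As an illustration of how a pointer selects a value from the example payload, here is a standalone Python sketch (not part of the `hcp` CLI):

```python
import json

def resolve_pointer(doc, pointer):
    """Resolve an RFC 6901 JSON Pointer such as "/nested/access_token"
    against a parsed JSON document."""
    if pointer == "":
        return doc
    current = doc
    for token in pointer.lstrip("/").split("/"):
        # Unescape per RFC 6901: "~1" -> "/", then "~0" -> "~".
        token = token.replace("~1", "/").replace("~0", "~")
        current = current[int(token)] if isinstance(current, list) else current[token]
    return current

payload = json.loads("""
{
  "access_token": "credentials",
  "nested": { "access_token": "nested-credentials" }
}
""")

print(resolve_pointer(payload, "/access_token"))         # credentials
print(resolve_pointer(payload, "/nested/access_token"))  # nested-credentials
```

The credential file machinery applies this same kind of lookup to whatever JSON it reads from the URL response, file, or environment variable.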
diff --git a/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-oidc.mdx b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-oidc.mdx
new file mode 100644
index 0000000000..176a682ae2
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/create-oidc.mdx
@@ -0,0 +1,115 @@
---
page_title: hcp iam workload-identity-providers create-oidc
description: |-
  The "hcp iam workload-identity-providers create-oidc" command lets you create an OIDC Workload Identity Provider.
---

# hcp iam workload-identity-providers create-oidc

Command: `hcp iam workload-identity-providers create-oidc`

The `hcp iam workload-identity-providers create-oidc` command creates a new
OIDC-based workload identity provider.

Common OIDC providers include Azure, GCP, Kubernetes Clusters, HashiCorp Vault,
GitHub, GitLab, and more.

When creating an OIDC provider, you must specify the issuer URL, the conditional
access statement, and optionally the allowed audiences.

The issuer URL is the URL of the OIDC provider that is allowed to exchange
workload identities. The URL must be a valid URL that is reachable from the HCP
control plane, and must match the issuer set in the response to the OIDC
discovery endpoint (${issuer_url}/.well-known/openid-configuration).

The conditional access statement must be set and is used to restrict which
tokens issued by the OIDC provider are allowed to exchange their identity for an
HCP access token. The conditional access statement is a hashicorp/go-bexpr string
that is evaluated when exchanging tokens. It has access to all the claims in the
external identity token, and they can be accessed via the
`jwt_claims.<claim_name>` syntax. An example conditional access statement that
restricts access to a specific subject claim is 'jwt_claims.sub == "example"'.

If unset, the allowed audiences will default to the resource name of the
provider.
The format will be:
`iam/project/PROJECT_ID/service-principal/SP_NAME/workload-identity-provider/WIP_NAME`.
If set, the presented access token must have an audience that is contained in
the set of allowed audiences.

## Usage

```shell-session
$ hcp iam workload-identity-providers create-oidc PROVIDER_NAME
    --conditional-access=STATEMENT --issuer=URI --service-principal=RESOURCE_NAME
    [Optional Flags]
```

## Examples

Azure - Allow exchanging a User Managed Identity:

```shell-session
$ hcp iam workload-identity-providers create-oidc azure-example-user-managed \
  --service-principal=iam/project/PROJECT/service-principal/example-sp \
  --issuer=https://sts.windows.net/AZURE_AD_TENANT_ID/ \
  --allowed-audience=MANAGED_IDENTITY_CLIENT_ID \
  --conditional-access='jwt_claims.sub == "MANAGED_IDENTITY_OBJECT_PRINCIPAL_ID"' \
  --description="Azure User Managed Identity Example"
```

GCP - Allow exchanging a Service Account Identity

[Full list of
claims](https://cloud.google.com/compute/docs/instances/verifying-instance-identity#payload):

```shell-session
$ hcp iam workload-identity-providers create-oidc gcp-example-service-account \
  --service-principal=iam/project/PROJECT/service-principal/example-sp \
  --issuer=https://accounts.google.com \
  --conditional-access='jwt_claims.sub == "SERVICE_ACCOUNT_UNIQUE_ID"' \
  --description="GCP Service Account Example"
```

GitLab - Allow exchanging a GitLab CI job identity

[Full list of
claims](https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html#token-payload):

```shell-session
$ hcp iam workload-identity-providers create-oidc gitlab-example \
  --service-principal=iam/project/PROJECT/service-principal/example-sp \
  --issuer=https://gitlab.com \
  --conditional-access='jwt_claims.project_path == "example-org/example-repo" and jwt_claims.job_id == 302' \
  --description="GitLab example-repo access for job 302"
```

## Positional arguments

- `PROVIDER_NAME` - The name
of the provider to create.

## Required flags

- `--conditional-access=STATEMENT` - The conditional access statement is a hashicorp/go-bexpr string that is
  evaluated when exchanging tokens. It restricts which upstream identities are
  allowed to access the service principal.

  The conditional access statement can access any claim from the external identity
  token using the `jwt_claims.<claim_name>` syntax. As an example, access the
  subject claim with `jwt_claims.sub`.

- `--issuer=URI` - The URL of the OIDC Issuer that is allowed to exchange workload identities.

- `--service-principal=RESOURCE_NAME` - The resource name of the service principal to create the provider for.

## Optional flags

- `--allowed-audience=AUD [Repeatable]` - The set of audiences set on the access token that are allowed to exchange
  identities. The access token must have an audience that is contained in this
  set.

  If no audience is set, the default allowed audience will be the resource name of
  the provider. The format will be:
  `iam/project/PROJECT_ID/service-principal/SP_NAME/workload-identity-provider/WIP_NAME`.

- `--description=TEXT` - A description of the provider.

diff --git a/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/delete.mdx b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/delete.mdx
new file mode 100644
index 0000000000..bec03e6f35
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/delete.mdx
@@ -0,0 +1,34 @@
---
page_title: hcp iam workload-identity-providers delete
description: |-
  The "hcp iam workload-identity-providers delete" command lets you delete a workload identity provider.
---

# hcp iam workload-identity-providers delete

Command: `hcp iam workload-identity-providers delete`

The `hcp iam workload-identity-providers delete` command deletes a workload
identity provider.
## Usage

```shell-session
$ hcp iam workload-identity-providers delete WIP_NAME [Optional Flags]
```

## Examples

Delete a workload identity provider:

```shell-session
$ hcp iam workload-identity-providers delete \
  iam/project/my-project/service-principal/my-sp/workload-identity-provider/example-wip
```

## Positional arguments

- `WIP_NAME` - The resource name of the workload identity provider to delete. The format of
  the resource name is
  `iam/project/PROJECT_ID/service-principal/SP_NAME/workload-identity-provider/WIP_NAME`.

diff --git a/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/index.mdx b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/index.mdx
new file mode 100644
index 0000000000..212c8e00ac
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/iam/workload-identity-providers/index.mdx
@@ -0,0 +1,51 @@
---
page_title: hcp iam workload-identity-providers
description: |-
  The "hcp iam workload-identity-providers" command lets you manage Workload Identity Providers.
---

# hcp iam workload-identity-providers

Command: `hcp iam workload-identity-providers`

The `hcp iam workload-identity-providers` command group lets you create and
manage Workload Identity Providers.

Creating a workload identity provider creates a trust relationship between HCP
and an external identity provider. Once created, a workload can exchange its
external identity token for an HCP access token.

HCP supports federating with AWS or any OIDC identity provider. This lets
workloads running on AWS, GCP, Azure, GitHub Actions, Kubernetes, and more
exchange their identity credentials for an HCP Service Principal access token
without having to store service principal credentials.

To make exchanging external credentials as easy as possible, create a credential
file using `hcp iam workload-identity-providers create-cred-file` after creating
your provider.
The credential file contains details on how to source the external identity
token and exchange it for an HCP access token. The `hcp` CLI can be
authenticated using a credential file by running `hcp auth login --cred-file`.
For programmatic access, the HCP Go SDK can be used and authenticated using a
credential file.

## Usage

```shell-session
$ hcp iam workload-identity-providers [Optional Flags]
```

## Aliases

- `wips`. For example: `hcp iam wips <command>`

## Commands

- [`create-aws`](/hcp/docs/cli/commands/iam/workload-identity-providers/create-aws) - Create an AWS Workload Identity Provider.
- [`create-oidc`](/hcp/docs/cli/commands/iam/workload-identity-providers/create-oidc) - Create an OIDC Workload Identity Provider.
- [`create-cred-file`](/hcp/docs/cli/commands/iam/workload-identity-providers/create-cred-file) - Create a credential file.
- [`delete`](/hcp/docs/cli/commands/iam/workload-identity-providers/delete) - Delete a workload identity provider.
- [`list`](/hcp/docs/cli/commands/iam/workload-identity-providers/list) - List workload identity providers.
- [`read`](/hcp/docs/cli/commands/iam/workload-identity-providers/read) - Show metadata about a workload identity provider.
## Usage

```shell-session
$ hcp iam workload-identity-providers list SP_NAME [Optional Flags]
```

## Examples

List workload identity providers given the service principal's resource name suffix:

```shell-session
$ hcp iam workload-identity-providers list example-sp
```

List workload identity providers given the service principal's resource name:

```shell-session
$ hcp iam workload-identity-providers list \
  iam/project/example-project/service-principal/example-sp
```

## Positional arguments

- `SP_NAME` - The name of the service principal to list workload identity providers for. The
  name may be specified as either:

  * The service principal's resource name. Formatted as one of the following:
    * `iam/project/PROJECT_ID/service-principal/SP_NAME`
    * `iam/organization/ORG_ID/service-principal/SP_NAME`
  * The resource name suffix, `SP_NAME`.
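Both accepted name forms can be illustrated with a small sketch (a hypothetical Python helper, not part of the CLI; it assumes the project ID comes from the active profile):

```python
def resolve_sp_name(name: str, project_id: str) -> str:
    """Expand a bare SP_NAME suffix into a full resource name;
    full resource names pass through unchanged."""
    if name.startswith(("iam/project/", "iam/organization/")):
        return name
    return f"iam/project/{project_id}/service-principal/{name}"

# Suffix form, expanded against the caller's project:
print(resolve_sp_name("example-sp", "example-project"))
# Full resource name, used as-is:
print(resolve_sp_name("iam/organization/my-org/service-principal/example-sp",
                      "example-project"))
```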
## Usage

```shell-session
$ hcp iam workload-identity-providers read WIP_NAME [Optional Flags]
```

## Examples

Read a workload identity provider:

```shell-session
$ hcp iam workload-identity-providers read \
  iam/project/my-project/service-principal/my-sp/workload-identity-provider/example-wip
```

## Positional arguments

- `WIP_NAME` - The resource name of the workload identity provider to read. The format of the
  resource name is
  `iam/project/PROJECT_ID/service-principal/SP_NAME/workload-identity-provider/WIP_NAME`.

diff --git a/content/hcp-docs/content/docs/cli/commands/index.mdx b/content/hcp-docs/content/docs/cli/commands/index.mdx
new file mode 100644
index 0000000000..6a8585f33c
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/index.mdx
@@ -0,0 +1,44 @@
---
page_title: hcp
description: |-
  The "hcp" command lets you interact with HCP.
---

# hcp

Command: `hcp`

The HCP command-line interface (CLI) is a unified tool to manage your HCP services.

## Usage

```shell-session
$ hcp [Optional Flags]
```

## Command groups

- [`auth`](/hcp/docs/cli/commands/auth) - Authenticate to HCP.
- [`projects`](/hcp/docs/cli/commands/projects) - Create and manage projects.
- [`profile`](/hcp/docs/cli/commands/profile) - View and edit HCP CLI properties.
- [`organizations`](/hcp/docs/cli/commands/organizations) - Interact with an existing organization.
- [`iam`](/hcp/docs/cli/commands/iam) - Manage identity and access management.
- [`waypoint`](/hcp/docs/cli/commands/waypoint) - Manage HCP Waypoint.
- [`vault-secrets`](/hcp/docs/cli/commands/vault-secrets) - Manage Vault Secrets.

## Commands

- [`version`](/hcp/docs/cli/commands/version) - Display the HCP CLI version.

## Global flags

- `--debug` - Enable debug output.

- `--format=FORMAT` - Sets the output format.

- `--profile=NAME` - The profile to use. If omitted, the currently selected profile will be used.
- `--project=ID` - The HCP Project ID to use. If omitted, the current project set in the configuration is used.

- `--quiet` - Minimizes output and disables interactive prompting.

diff --git a/content/hcp-docs/content/docs/cli/commands/organizations/iam/add-binding.mdx b/content/hcp-docs/content/docs/cli/commands/organizations/iam/add-binding.mdx
new file mode 100644
index 0000000000..0e776d8ebc
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/organizations/iam/add-binding.mdx
@@ -0,0 +1,39 @@
---
page_title: hcp organizations iam add-binding
description: |-
  The "hcp organizations iam add-binding" command lets you add an IAM policy binding for the organization.
---

# hcp organizations iam add-binding

Command: `hcp organizations iam add-binding`

The `hcp organizations iam add-binding` command adds an IAM policy binding for
the organization. A binding grants the specified principal the given role on the
organization.

To view the available roles to bind, run `hcp iam roles list`.

## Usage

```shell-session
$ hcp organizations iam add-binding --member=PRINCIPAL_ID --role=ROLE_ID [Optional
  Flags]
```

## Examples

Bind a principal to role `roles/viewer`:

```shell-session
$ hcp organizations iam add-binding \
  --member=ef938a22-09cf-4be9-b4d0-1f4587f80f53 \
  --role=roles/viewer
```

## Required flags

- `--member=PRINCIPAL_ID` - The ID of the principal to add the role binding to.

- `--role=ROLE_ID` - The role ID (e.g. "roles/admin", "roles/contributor", "roles/viewer") to bind the member to.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/organizations/iam/delete-binding.mdx b/content/hcp-docs/content/docs/cli/commands/organizations/iam/delete-binding.mdx new file mode 100644 index 0000000000..c5c1749538 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/organizations/iam/delete-binding.mdx @@ -0,0 +1,38 @@ +--- +page_title: hcp organizations iam delete-binding +description: |- + The "hcp organizations iam delete-binding" command lets you delete an IAM policy binding for the organization. +--- + +# hcp organizations iam delete-binding + +Command: `hcp organizations iam delete-binding` + +The `hcp organizations iam delete-binding` command deletes an IAM policy +binding for the organization. A binding consists of a principal and a role. + +To view the existing role bindings, run `hcp organizations iam read-policy`. + +## Usage + +```shell-session +$ hcp organizations iam delete-binding --member=PRINCIPAL_ID --role=ROLE_ID + [Optional Flags] +``` + +## Examples + +Delete a role binding for a principal previously granted role `roles/viewer`: + +```shell-session +$ hcp organizations iam delete-binding \ + --member=ef938a22-09cf-4be9-b4d0-1f4587f80f53 \ + --role=roles/viewer +``` + +## Required flags + +- `--member=PRINCIPAL_ID` - The ID of the principal to remove the role binding from. + +- `--role=ROLE_ID` - The role ID (e.g. "roles/admin", "roles/contributor", "roles/viewer") to remove the member from. + diff --git a/content/hcp-docs/content/docs/cli/commands/organizations/iam/index.mdx b/content/hcp-docs/content/docs/cli/commands/organizations/iam/index.mdx new file mode 100644 index 0000000000..6ba26948db --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/organizations/iam/index.mdx @@ -0,0 +1,26 @@ +--- +page_title: hcp organizations iam +description: |- + The "hcp organizations iam" command lets you manage an organization's IAM policy. 
+--- + +# hcp organizations iam + +Command: `hcp organizations iam` + +The `hcp organizations iam` command group is used to manage an organization's +IAM Policy. + +## Usage + +```shell-session +$ hcp organizations iam [Optional Flags] +``` + +## Commands + +- [`add-binding`](/hcp/docs/cli/commands/organizations/iam/add-binding) - Add an IAM policy binding for the organization. +- [`delete-binding`](/hcp/docs/cli/commands/organizations/iam/delete-binding) - Delete an IAM policy binding for the organization. +- [`read-policy`](/hcp/docs/cli/commands/organizations/iam/read-policy) - Read the IAM policy for the organization. +- [`set-policy`](/hcp/docs/cli/commands/organizations/iam/set-policy) - Set the IAM policy for the organization. + diff --git a/content/hcp-docs/content/docs/cli/commands/organizations/iam/read-policy.mdx b/content/hcp-docs/content/docs/cli/commands/organizations/iam/read-policy.mdx new file mode 100644 index 0000000000..ea1ebc82bc --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/organizations/iam/read-policy.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp organizations iam read-policy +description: |- + The "hcp organizations iam read-policy" command lets you read the IAM policy for the organization. +--- + +# hcp organizations iam read-policy + +Command: `hcp organizations iam read-policy` + +The `hcp organizations iam read-policy` command reads the IAM policy for the +organization. 
## Usage

```shell-session
$ hcp organizations iam read-policy [Optional Flags]
```

## Examples

Read the IAM Policy for the organization:

```shell-session
$ hcp organizations iam read-policy
```

diff --git a/content/hcp-docs/content/docs/cli/commands/organizations/iam/set-policy.mdx b/content/hcp-docs/content/docs/cli/commands/organizations/iam/set-policy.mdx
new file mode 100644
index 0000000000..261cfc66d5
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/organizations/iam/set-policy.mdx
@@ -0,0 +1,89 @@
---
page_title: hcp organizations iam set-policy
description: |-
  The "hcp organizations iam set-policy" command lets you set the IAM policy for the organization.
---

# hcp organizations iam set-policy

Command: `hcp organizations iam set-policy`

The `hcp organizations iam set-policy` command sets the IAM policy for the
organization. Setting the entire policy must be done with great care. To add
or remove a single principal from the policy, prefer `hcp organizations
iam add-binding` and the related `hcp organizations iam delete-binding`.

The policy file must be a JSON file that contains the IAM policy.

The policy JSON file is an object with the following format:

```json
{
  "bindings": [
    {
      "role_id": "ROLE_ID",
      "members": [
        {
          "member_id": "PRINCIPAL_ID",
          "member_type": "USER" | "GROUP" | "SERVICE_PRINCIPAL"
        }
      ]
    }
  ],
  "etag": "ETAG"
}
```

If set, the etag of the policy must be equal to that of the existing policy. To
view the existing policy and its etag, run `hcp organizations iam read-policy
--format=json`. If unset, the existing policy's etag will be fetched and used.

## Usage

```shell-session
$ hcp organizations iam set-policy --policy-file=PATH [Optional Flags]
```

## Examples

Set the IAM Policy for the organization:

```shell-session
$ cat >policy.json <<EOF
{
  "bindings": [
    {
      "role_id": "roles/viewer",
      "members": [
        {
          "member_id": "ef938a22-09cf-4be9-b4d0-1f4587f80f53",
          "member_type": "USER"
        }
      ]
    }
  ]
}
EOF
$ hcp organizations iam set-policy --policy-file=policy.json
```

## Aliases

- `orgs`.
For example: `hcp orgs <command>`

## Command groups

- [`iam`](/hcp/docs/cli/commands/organizations/iam) - Manage an organization's IAM policy.

## Commands

- [`read`](/hcp/docs/cli/commands/organizations/read) - Show metadata for the organization.
- [`list`](/hcp/docs/cli/commands/organizations/list) - List organizations.

diff --git a/content/hcp-docs/content/docs/cli/commands/organizations/list.mdx b/content/hcp-docs/content/docs/cli/commands/organizations/list.mdx
new file mode 100644
index 0000000000..cb57e1e7f1
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/organizations/list.mdx
@@ -0,0 +1,19 @@
---
page_title: hcp organizations list
description: |-
  The "hcp organizations list" command lets you list organizations.
---

# hcp organizations list

Command: `hcp organizations list`

The `hcp organizations list` command lists the organizations the authenticated
principal is a member of.

## Usage

```shell-session
$ hcp organizations list [Optional Flags]
```

diff --git a/content/hcp-docs/content/docs/cli/commands/organizations/read.mdx b/content/hcp-docs/content/docs/cli/commands/organizations/read.mdx
new file mode 100644
index 0000000000..d620b9afb5
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/organizations/read.mdx
@@ -0,0 +1,22 @@
---
page_title: hcp organizations read
description: |-
  The "hcp organizations read" command lets you show metadata for the organization.
---

# hcp organizations read

Command: `hcp organizations read`

The `hcp organizations read` command shows metadata for the organization.

## Usage

```shell-session
$ hcp organizations read ID [Optional Flags]
```

## Positional arguments

- `ID` - ID of the organization to read.
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/display.mdx b/content/hcp-docs/content/docs/cli/commands/profile/display.mdx
new file mode 100644
index 0000000000..1a89b466c2
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/display.mdx
@@ -0,0 +1,18 @@
---
page_title: hcp profile display
description: |-
  The "hcp profile display" command lets you display the active profile.
---

# hcp profile display

Command: `hcp profile display`

The `hcp profile display` command displays the active profile.

## Usage

```shell-session
$ hcp profile display [Optional Flags]
```

diff --git a/content/hcp-docs/content/docs/cli/commands/profile/get.mdx b/content/hcp-docs/content/docs/cli/commands/profile/get.mdx
new file mode 100644
index 0000000000..1b24b3cabf
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/get.mdx
@@ -0,0 +1,56 @@
---
page_title: hcp profile get
description: |-
  The "hcp profile get" command lets you get an HCP CLI property.
---

# hcp profile get

Command: `hcp profile get`

The `hcp profile get` command gets the specified property in your active
profile.

To view all currently set properties, run `hcp profile display`.

## Usage

```shell-session
$ hcp profile get COMPONENT/PROPERTY [Optional Flags]
```

## Positional arguments

- `COMPONENT/PROPERTY` - Property to get. Note that `COMPONENT/` is optional when referring to
  top-level profile fields such as `organization_id` and `project_id`.

  Using component names is required for other properties such as
  `core/output_format`. Consult the Available Properties section below for a
  comprehensive list of properties.

## Available Properties

* `organization_id`
  * Organization ID of the HCP organization to operate on.

* `project_id`
  * Project ID of the HCP project to operate on by default. This can be overridden
    by using the global `--project` flag.
* `core`

  * `no_color` - If True, color will not be used when printing messages in the terminal.

  * `output_format` - Default output format for `hcp` commands. This is the equivalent of using the
    global `--format` flag. Supported output formats: `pretty`, `table`, and `json`.

  * `quiet` - If True, prompts will be disabled and output will be minimized.

  * `verbosity` - Default logging verbosity for `hcp` commands. This is the equivalent of using
    the global `--verbosity` flag. Supported log levels: `trace`, `debug`, `info`,
    `warn`, and `error`.

* `vault-secrets`

  * `app` - HCP Vault Secrets application name to operate on by default.

diff --git a/content/hcp-docs/content/docs/cli/commands/profile/index.mdx b/content/hcp-docs/content/docs/cli/commands/profile/index.mdx
new file mode 100644
index 0000000000..c19706ef07
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/index.mdx
@@ -0,0 +1,48 @@
---
page_title: hcp profile
description: |-
  The "hcp profile" command lets you view and edit HCP CLI properties.
---

# hcp profile

Command: `hcp profile`

The `hcp profile` command group lets you initialize, set, view, and unset
properties used by the HCP CLI.

A profile is a collection of properties (configuration values) that inform the
behavior of the `hcp` CLI. To initialize a profile, run `hcp profile init`. You
can create additional profiles using `hcp profile profiles create`.

To switch between profiles, use `hcp profile profiles activate`.

`hcp` has several global flags that have matching profile properties. Examples
are the `project_id` and `core/output_format` properties and their respective
flags `--project` and `--format`. The difference between properties and flags is
that flags apply only to the invoked command, while properties are persistent
across all invocations.
Thus profiles let you conveniently maintain the same
settings across command executions, and multiple profiles let you easily
switch between different projects and settings.

To run a command using a profile other than the active profile, pass the
`--profile` flag to the command.

## Usage

```shell-session
$ hcp profile [Optional Flags]
```

## Command groups

- [`profiles`](/hcp/docs/cli/commands/profile/profiles) - Manage HCP profiles.

## Commands

- [`init`](/hcp/docs/cli/commands/profile/init) - Initialize the current profile.
- [`display`](/hcp/docs/cli/commands/profile/display) - Display the active profile.
- [`set`](/hcp/docs/cli/commands/profile/set) - Set an HCP CLI property.
- [`unset`](/hcp/docs/cli/commands/profile/unset) - Unset an HCP CLI property.
- [`get`](/hcp/docs/cli/commands/profile/get) - Get an HCP CLI property.

diff --git a/content/hcp-docs/content/docs/cli/commands/profile/init.mdx b/content/hcp-docs/content/docs/cli/commands/profile/init.mdx
new file mode 100644
index 0000000000..1ec25b456a
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/init.mdx
@@ -0,0 +1,27 @@
---
page_title: hcp profile init
description: |-
  The "hcp profile init" command lets you initialize the current profile.
---

# hcp profile init

Command: `hcp profile init`

The `hcp profile init` command configures the HCP CLI to run commands against
the correct context; namely, against the desired organization and project ID.
This command is interactive. To set configuration non-interactively, use
`hcp profile set`.

For a list of all available options, run `hcp profile init --help`.

## Usage

```shell-session
$ hcp profile init [Optional Flags]
```

## Flags

- `--vault-secrets` - Initializes Vault Secrets configuration.
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/profiles/activate.mdx b/content/hcp-docs/content/docs/cli/commands/profile/profiles/activate.mdx
new file mode 100644
index 0000000000..acc3cb3739
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/profiles/activate.mdx
@@ -0,0 +1,30 @@
---
page_title: hcp profile profiles activate
description: |-
  The "hcp profile profiles activate" command lets you activate an existing profile.
---

# hcp profile profiles activate

Command: `hcp profile profiles activate`

The `hcp profile profiles activate` command activates an existing profile.

## Usage

```shell-session
$ hcp profile profiles activate NAME [Optional Flags]
```

## Examples

To activate profile `my-profile`, run:

```shell-session
$ hcp profile profiles activate my-profile
```

## Positional arguments

- `NAME` - The name of the profile to activate.

diff --git a/content/hcp-docs/content/docs/cli/commands/profile/profiles/create.mdx b/content/hcp-docs/content/docs/cli/commands/profile/profiles/create.mdx
new file mode 100644
index 0000000000..22ac3f2a01
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/profiles/create.mdx
@@ -0,0 +1,38 @@
---
page_title: hcp profile profiles create
description: |-
  The "hcp profile profiles create" command lets you create a new HCP profile.
---

# hcp profile profiles create

Command: `hcp profile profiles create`

The `hcp profile profiles create` command creates a new named profile.

Profile names start with a letter and may contain lower case letters a-z, upper
case letters A-Z, digits 0-9, and hyphens '-'. The maximum length for a profile
name is 64 characters.
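The naming rule above corresponds to a pattern along these lines (an illustrative Python sketch, not the CLI's actual validation code):

```python
import re

# Starts with a letter; then letters, digits, or hyphens; 64 characters max.
PROFILE_NAME = re.compile(r"[A-Za-z][A-Za-z0-9-]{0,63}")

def is_valid_profile_name(name: str) -> bool:
    # fullmatch ensures the whole string matches, not just a prefix.
    return PROFILE_NAME.fullmatch(name) is not None

print(is_valid_profile_name("my-profile"))  # True
print(is_valid_profile_name("1-profile"))   # False: must start with a letter
print(is_valid_profile_name("a" * 65))      # False: longer than 64 characters
```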
+
+## Usage
+
+```shell-session
+$ hcp profile profiles create NAME [Optional Flags]
+```
+
+## Examples
+
+To create a new profile, run:
+
+```shell-session
+$ hcp profile profiles create my-profile
+```
+
+## Positional arguments
+
+- `NAME` - The name of the profile to create.
+
+## Flags
+
+- `--no-activate` - Disables automatic activation of the newly created profile.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/profiles/delete.mdx b/content/hcp-docs/content/docs/cli/commands/profile/profiles/delete.mdx
new file mode 100644
index 0000000000..3a42f04c35
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/profiles/delete.mdx
@@ -0,0 +1,47 @@
+---
+page_title: hcp profile profiles delete
+description: |-
+  The "hcp profile profiles delete" command lets you delete an existing HCP profile.
+---
+
+# hcp profile profiles delete
+
+Command: `hcp profile profiles delete`
+
+The `hcp profile profiles delete` command deletes one or more existing HCP
+profiles. The active profile may not be deleted.
+
+To delete the current active profile, first run `hcp profile profiles activate`
+to activate a different profile.
+
+## Usage
+
+```shell-session
+$ hcp profile profiles delete PROFILE_NAMES [PROFILE_NAMES ...] [Optional Flags]
+```
+
+## Examples
+
+Delete a profile:
+
+```shell-session
+$ hcp profile profiles delete my-profile
+```
+
+Delete multiple profiles:
+
+```shell-session
+$ hcp profile profiles delete my-profile-1 my-profile-2 my-profile-3
+```
+
+Delete the active profile:
+
+```shell-session
+$ hcp profile profiles activate my-other-profile
+$ hcp profile profiles delete my-profile
+```
+
+## Positional arguments
+
+- `PROFILE_NAMES [PROFILE_NAMES ...]` - The names of the profiles to delete. May not include the active profile.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/profiles/index.mdx b/content/hcp-docs/content/docs/cli/commands/profile/profiles/index.mdx
new file mode 100644
index 0000000000..785e92e176
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/profiles/index.mdx
@@ -0,0 +1,30 @@
+---
+page_title: hcp profile profiles
+description: |-
+  The "hcp profile profiles" command lets you manage HCP profiles.
+---
+
+# hcp profile profiles
+
+Command: `hcp profile profiles`
+
+The `hcp profile profiles` command group manages the set of named HCP
+profiles. You can create new profiles using `hcp profile profiles create` and
+activate existing profiles using `hcp profile profiles activate`. To run a
+single command against a profile other than the active profile, run the command
+with the flag `--profile`.
+
+## Usage
+
+```shell-session
+$ hcp profile profiles [Optional Flags]
+```
+
+## Commands
+
+- [`create`](/hcp/docs/cli/commands/profile/profiles/create) - Create a new HCP profile.
+- [`delete`](/hcp/docs/cli/commands/profile/profiles/delete) - Delete an existing HCP profile.
+- [`list`](/hcp/docs/cli/commands/profile/profiles/list) - List existing HCP profiles.
+- [`activate`](/hcp/docs/cli/commands/profile/profiles/activate) - Activate an existing profile.
+- [`rename`](/hcp/docs/cli/commands/profile/profiles/rename) - Rename an existing profile.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/profiles/list.mdx b/content/hcp-docs/content/docs/cli/commands/profile/profiles/list.mdx
new file mode 100644
index 0000000000..f0b13e6bd9
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/profiles/list.mdx
@@ -0,0 +1,26 @@
+---
+page_title: hcp profile profiles list
+description: |-
+  The "hcp profile profiles list" command lets you list existing HCP profiles.
+---
+
+# hcp profile profiles list
+
+Command: `hcp profile profiles list`
+
+The `hcp profile profiles list` command lists existing HCP profiles.
+
+## Usage
+
+```shell-session
+$ hcp profile profiles list [Optional Flags]
+```
+
+## Examples
+
+To list existing profiles, run:
+
+```shell-session
+$ hcp profile profiles list
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/profiles/rename.mdx b/content/hcp-docs/content/docs/cli/commands/profile/profiles/rename.mdx
new file mode 100644
index 0000000000..e800994a90
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/profiles/rename.mdx
@@ -0,0 +1,34 @@
+---
+page_title: hcp profile profiles rename
+description: |-
+  The "hcp profile profiles rename" command lets you rename an existing profile.
+---
+
+# hcp profile profiles rename
+
+Command: `hcp profile profiles rename`
+
+The `hcp profile profiles rename` command renames an existing profile.
+
+## Usage
+
+```shell-session
+$ hcp profile profiles rename NAME --new-name=NEW_NAME [Optional Flags]
+```
+
+## Examples
+
+To rename profile `my-profile` to `new-profile`, run:
+
+```shell-session
+$ hcp profile profiles rename my-profile --new-name=new-profile
+```
+
+## Positional arguments
+
+- `NAME` - The name of the profile to rename.
+
+## Required flags
+
+- `--new-name=NEW_NAME` - Specifies the new name of the profile.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/set.mdx b/content/hcp-docs/content/docs/cli/commands/profile/set.mdx
new file mode 100644
index 0000000000..731250739a
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/set.mdx
@@ -0,0 +1,67 @@
+---
+page_title: hcp profile set
+description: |-
+  The "hcp profile set" command lets you set an HCP CLI property.
+---
+
+# hcp profile set
+
+Command: `hcp profile set`
+
+The `hcp profile set` command sets the specified property in your active
+profile. A property governs the behavior of a specific aspect of the HCP CLI,
+such as the organization and project to target or the default output format
+across commands.
+
+To view all currently set properties, run `hcp profile display`, or run `hcp
+profile get` to read the value of an individual property.
+
+To unset properties, use `hcp profile unset`.
+
+The HCP CLI comes with a default profile but supports multiple profiles. To
+create additional profiles, use `hcp profile profiles create`, and use `hcp
+profile profiles activate` to switch between them.
+
+## Usage
+
+```shell-session
+$ hcp profile set COMPONENT/PROPERTY VALUE [Optional Flags]
+```
+
+## Positional arguments
+
+- `COMPONENT/PROPERTY` - Property to be set. Note that `COMPONENT/` is optional when referring to
+  top-level profile fields, such as `organization_id` and `project_id`.
+
+  Using component names is required for setting other properties like
+  `core/output_format`. Consult the Available Properties section below for a
+  comprehensive list of properties.
+
+- `VALUE` - Value to be set.
+
+## Available Properties
+* `organization_id`
+  * Organization ID of the HCP organization to operate on.
+
+* `project_id`
+  * Project ID of the HCP project to operate on by default. This can be overridden
+    by using the global `--project` flag.
+
+* `core`
+
+  * `no_color` - If True, color will not be used when printing messages in the terminal.
+
+  * `output_format` - Default output format for `hcp` commands. This is the equivalent of using the
+    global `--format` flag. Supported output formats: `pretty`, `table`, and `json`.
+
+  * `quiet` - If True, prompts will be disabled and output will be minimized.
+
+  * `verbosity` - Default logging verbosity for `hcp` commands. This is the equivalent of using
+    the global `--verbosity` flag. Supported log levels: `trace`, `debug`, `info`,
+    `warn`, and `error`.
+
+* `vault-secrets`
+
+  * `app` - HCP Vault Secrets application name to operate on by default.
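+
+## Examples
+
+Set the default project for all commands, then change the default output format using the `core` component prefix:
+
+```shell-session
+$ hcp profile set project_id cd3d34d5-ceeb-493d-b004-9297365a01af
+$ hcp profile set core/output_format json
+```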
+
+
diff --git a/content/hcp-docs/content/docs/cli/commands/profile/unset.mdx b/content/hcp-docs/content/docs/cli/commands/profile/unset.mdx
new file mode 100644
index 0000000000..6bfc6c4e79
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/profile/unset.mdx
@@ -0,0 +1,56 @@
+---
+page_title: hcp profile unset
+description: |-
+  The "hcp profile unset" command lets you unset an HCP CLI property.
+---
+
+# hcp profile unset
+
+Command: `hcp profile unset`
+
+The `hcp profile unset` command unsets the specified property in your active
+profile.
+
+To view all currently set properties, run `hcp profile display`.
+
+## Usage
+
+```shell-session
+$ hcp profile unset COMPONENT/PROPERTY [Optional Flags]
+```
+
+## Positional arguments
+
+- `COMPONENT/PROPERTY` - Property to be unset. Note that `COMPONENT/` is optional when referring to
+  top-level profile fields, such as `organization_id` and `project_id`.
+
+  Using component names is required for unsetting other properties like
+  `core/output_format`. Consult the Available Properties section below for a
+  comprehensive list of properties.
+
+## Available Properties
+* `organization_id`
+  * Organization ID of the HCP organization to operate on.
+
+* `project_id`
+  * Project ID of the HCP project to operate on by default. This can be overridden
+    by using the global `--project` flag.
+
+* `core`
+
+  * `no_color` - If True, color will not be used when printing messages in the terminal.
+
+  * `output_format` - Default output format for `hcp` commands. This is the equivalent of using the
+    global `--format` flag. Supported output formats: `pretty`, `table`, and `json`.
+
+  * `quiet` - If True, prompts will be disabled and output will be minimized.
+
+  * `verbosity` - Default logging verbosity for `hcp` commands. This is the equivalent of using
+    the global `--verbosity` flag. Supported log levels: `trace`, `debug`, `info`,
+    `warn`, and `error`.
+ +* `vault-secrets` + + * `app` - HCP Vault Secrets application name to operate on by default. + + diff --git a/content/hcp-docs/content/docs/cli/commands/projects/create.mdx b/content/hcp-docs/content/docs/cli/commands/projects/create.mdx new file mode 100644 index 0000000000..553a473634 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/projects/create.mdx @@ -0,0 +1,38 @@ +--- +page_title: hcp projects create +description: |- + The "hcp projects create" command lets you create a new project. +--- + +# hcp projects create + +Command: `hcp projects create` + +The `hcp projects create` command creates a new project with the given name. +The currently authenticated principal will be given role "admin" on the newly +created project. + +## Usage + +```shell-session +$ hcp projects create NAME [Optional Flags] +``` + +## Examples + +Creating a project with a description: + +```shell-session +$ hcp projects create example-project --description="my test project" +``` + +## Positional arguments + +- `NAME` - Name of the project to create. + +## Flags + +- `--description=DESCRIPTION` - An optional description for the project. + +- `--set-as-default` - Set the newly created project as the default project in the active profile. + diff --git a/content/hcp-docs/content/docs/cli/commands/projects/delete.mdx b/content/hcp-docs/content/docs/cli/commands/projects/delete.mdx new file mode 100644 index 0000000000..de501a14c2 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/projects/delete.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp projects delete +description: |- + The "hcp projects delete" command lets you delete a project. +--- + +# hcp projects delete + +Command: `hcp projects delete` + +The `hcp projects delete` command deletes the specified project. The project +must be empty before it can be deleted. 
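+
+To create a project and immediately make it the default project in the active
+profile, combine creation with the `--set-as-default` flag:
+
+```shell-session
+$ hcp projects create example-project --set-as-default
+```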
+
+## Usage
+
+```shell-session
+$ hcp projects delete [Optional Flags]
+```
+
+## Examples
+
+Delete a project:
+
+```shell-session
+$ hcp projects delete --project=cd3d34d5-ceeb-493d-b004-9297365a01af
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/projects/iam/add-binding.mdx b/content/hcp-docs/content/docs/cli/commands/projects/iam/add-binding.mdx
new file mode 100644
index 0000000000..88a3bd5673
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/projects/iam/add-binding.mdx
@@ -0,0 +1,40 @@
+---
+page_title: hcp projects iam add-binding
+description: |-
+  The "hcp projects iam add-binding" command lets you add an IAM policy binding for a project.
+---
+
+# hcp projects iam add-binding
+
+Command: `hcp projects iam add-binding`
+
+The `hcp projects iam add-binding` command adds an IAM policy binding for the
+given project. A binding grants the specified principal the given role on the
+project.
+
+To view the available roles to bind, run `hcp iam roles list`.
+
+## Usage
+
+```shell-session
+$ hcp projects iam add-binding --member=PRINCIPAL_ID --role=ROLE_ID [Optional
+  Flags]
+```
+
+## Examples
+
+Bind a principal to role `roles/viewer`:
+
+```shell-session
+$ hcp projects iam add-binding \
+  --project=8647ae06-ca65-467a-b72d-edba1f908fc8 \
+  --member=ef938a22-09cf-4be9-b4d0-1f4587f80f53 \
+  --role=roles/viewer
+```
+
+## Required flags
+
+- `--member=PRINCIPAL_ID` - The ID of the principal to add the role binding to.
+
+- `--role=ROLE_ID` - The role ID (e.g. "roles/admin", "roles/contributor", "roles/viewer") to bind the member to.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/projects/iam/delete-binding.mdx b/content/hcp-docs/content/docs/cli/commands/projects/iam/delete-binding.mdx new file mode 100644 index 0000000000..e8be3b7a67 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/projects/iam/delete-binding.mdx @@ -0,0 +1,39 @@ +--- +page_title: hcp projects iam delete-binding +description: |- + The "hcp projects iam delete-binding" command lets you delete an IAM policy binding for a project. +--- + +# hcp projects iam delete-binding + +Command: `hcp projects iam delete-binding` + +The `hcp projects iam delete-binding` command deletes an IAM policy binding for +the given project. A binding consists of a principal and a role. + +To view the existing role bindings, run `hcp projects iam read-policy`. + +## Usage + +```shell-session +$ hcp projects iam delete-binding --member=PRINCIPAL_ID --role=ROLE_ID [Optional + Flags] +``` + +## Examples + +Delete a role binding for a principal previously granted role `roles/viewer`: + +```shell-session +$ hcp projects iam delete-binding \ + --project=8647ae06-ca65-467a-b72d-edba1f908fc8 \ + --member=ef938a22-09cf-4be9-b4d0-1f4587f80f53 \ + --role=roles/viewer +``` + +## Required flags + +- `--member=PRINCIPAL_ID` - The ID of the principal to remove the role binding from. + +- `--role=ROLE_ID` - The role ID (e.g. "roles/admin", "roles/contributor", "roles/viewer") to remove the member from. + diff --git a/content/hcp-docs/content/docs/cli/commands/projects/iam/index.mdx b/content/hcp-docs/content/docs/cli/commands/projects/iam/index.mdx new file mode 100644 index 0000000000..f1681f9e16 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/projects/iam/index.mdx @@ -0,0 +1,25 @@ +--- +page_title: hcp projects iam +description: |- + The "hcp projects iam" command lets you manage a project's IAM policy. 
+--- + +# hcp projects iam + +Command: `hcp projects iam` + +The `hcp projects iam` command group lets you manage a project's IAM Policy. + +## Usage + +```shell-session +$ hcp projects iam [Optional Flags] +``` + +## Commands + +- [`add-binding`](/hcp/docs/cli/commands/projects/iam/add-binding) - Add an IAM policy binding for a project. +- [`delete-binding`](/hcp/docs/cli/commands/projects/iam/delete-binding) - Delete an IAM policy binding for a project. +- [`read-policy`](/hcp/docs/cli/commands/projects/iam/read-policy) - Read the IAM policy for a project. +- [`set-policy`](/hcp/docs/cli/commands/projects/iam/set-policy) - Set the IAM policy for a project. + diff --git a/content/hcp-docs/content/docs/cli/commands/projects/iam/read-policy.mdx b/content/hcp-docs/content/docs/cli/commands/projects/iam/read-policy.mdx new file mode 100644 index 0000000000..136f15d07e --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/projects/iam/read-policy.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp projects iam read-policy +description: |- + The "hcp projects iam read-policy" command lets you read the IAM policy for a project. +--- + +# hcp projects iam read-policy + +Command: `hcp projects iam read-policy` + +The `hcp projects iam read-policy` command reads the IAM policy for a project. 
+
+## Usage
+
+```shell-session
+$ hcp projects iam read-policy [Optional Flags]
+```
+
+## Examples
+
+Read the IAM policy for a project:
+
+```shell-session
+$ hcp projects iam read-policy \
+  --project=8647ae06-ca65-467a-b72d-edba1f908fc8
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/projects/iam/set-policy.mdx b/content/hcp-docs/content/docs/cli/commands/projects/iam/set-policy.mdx
new file mode 100644
index 0000000000..e97c0a5c9b
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/projects/iam/set-policy.mdx
@@ -0,0 +1,90 @@
+---
+page_title: hcp projects iam set-policy
+description: |-
+  The "hcp projects iam set-policy" command lets you set the IAM policy for a project.
+---
+
+# hcp projects iam set-policy
+
+Command: `hcp projects iam set-policy`
+
+The `hcp projects iam set-policy` command sets the IAM policy for the project,
+given a project ID and a JSON-encoded file that contains the IAM policy. If
+adding or removing a single principal from the policy, prefer using `hcp
+projects iam add-binding` and the related `hcp projects iam delete-binding`.
+
+The policy file is expected to be a JSON object with the following format:
+
+```json
+{
+  "bindings": [
+    {
+      "role_id": "ROLE_ID",
+      "members": [
+        {
+          "member_id": "PRINCIPAL_ID",
+          "member_type": "USER" | "GROUP" | "SERVICE_PRINCIPAL"
+        }
+      ]
+    }
+  ],
+  "etag": "ETAG"
+}
+```
+
+If set, the etag of the policy must be equal to that of the existing policy. To
+view the existing policy and its etag, run `hcp projects iam read-policy
+--format=json`. If unset, the existing policy's etag will be fetched and used.
+
+## Usage
+
+```shell-session
+$ hcp projects iam set-policy --policy-file=PATH [Optional Flags]
+```
+
+## Examples
+
+Set the IAM policy for a project:
+
+```shell-session
+$ cat > policy.json <<EOF
+{
+  "bindings": [
+    {
+      "role_id": "roles/viewer",
+      "members": [
+        {
+          "member_id": "ef938a22-09cf-4be9-b4d0-1f4587f80f53",
+          "member_type": "USER"
+        }
+      ]
+    }
+  ]
+}
+EOF
+$ hcp projects iam set-policy \
+  --project=8647ae06-ca65-467a-b72d-edba1f908fc8 \
+  --policy-file=policy.json
+```
+
+## Required flags
+
+- `--policy-file=PATH` - The path to a file containing the JSON-encoded IAM policy to apply.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/projects/index.mdx b/content/hcp-docs/content/docs/cli/commands/projects/index.mdx
new file mode 100644
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/projects/index.mdx
+---
+page_title: hcp projects
+description: |-
+  The "hcp projects" command group lets you manage HCP projects.
+---
+
+# hcp projects
+
+Command: `hcp projects`
+
+The `hcp projects` command group lets you manage HCP projects.
+
+## Usage
+
+```shell-session
+$ hcp projects [Optional Flags]
+```
+
+## Command groups
+
+- [`iam`](/hcp/docs/cli/commands/projects/iam) - Manage a project's IAM policy.
+
+## Commands
+
+- [`create`](/hcp/docs/cli/commands/projects/create) - Create a new project.
+- [`read`](/hcp/docs/cli/commands/projects/read) - Show metadata for the project.
+- [`list`](/hcp/docs/cli/commands/projects/list) - List HCP projects.
+- [`delete`](/hcp/docs/cli/commands/projects/delete) - Delete a project.
+- [`update`](/hcp/docs/cli/commands/projects/update) - Update an existing project.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/projects/list.mdx b/content/hcp-docs/content/docs/cli/commands/projects/list.mdx
new file mode 100644
index 0000000000..950d2a156d
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/projects/list.mdx
@@ -0,0 +1,18 @@
+---
+page_title: hcp projects list
+description: |-
+  The "hcp projects list" command lets you list HCP projects.
+---
+
+# hcp projects list
+
+Command: `hcp projects list`
+
+The `hcp projects list` command lists HCP projects.
+
+## Usage
+
+```shell-session
+$ hcp projects list [Optional Flags]
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/projects/read.mdx b/content/hcp-docs/content/docs/cli/commands/projects/read.mdx
new file mode 100644
index 0000000000..628c5b5e5c
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/projects/read.mdx
@@ -0,0 +1,26 @@
+---
+page_title: hcp projects read
+description: |-
+  The "hcp projects read" command lets you show metadata for the project.
+---
+
+# hcp projects read
+
+Command: `hcp projects read`
+
+The `hcp projects read` command shows metadata for the project.
+
+## Usage
+
+```shell-session
+$ hcp projects read [Optional Flags]
+```
+
+## Examples
+
+Read a project:
+
+```shell-session
+$ hcp projects read --project=cd3d34d5-ceeb-493d-b004-9297365a01af
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/projects/update.mdx b/content/hcp-docs/content/docs/cli/commands/projects/update.mdx
new file mode 100644
index 0000000000..da88cad34e
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/projects/update.mdx
@@ -0,0 +1,33 @@
+---
+page_title: hcp projects update
+description: |-
+  The "hcp projects update" command lets you update an existing project.
+---
+
+# hcp projects update
+
+Command: `hcp projects update`
+
+The `hcp projects update` command updates an existing project.
+
+## Usage
+
+```shell-session
+$ hcp projects update [Optional Flags]
+```
+
+## Examples
+
+Update a project's name and description:
+
+```shell-session
+$ hcp projects update --project=cd3d34d5-ceeb-493d-b004-9297365a01af \
+  --name=new-name --description="updated description"
+```
+
+## Flags
+
+- `--description=NEW_DESCRIPTION` - New description for the project.
+
+- `--name=NEW_NAME` - New name for the project.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/create.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/create.mdx
new file mode 100644
index 0000000000..a07561ebef
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/create.mdx
@@ -0,0 +1,36 @@
+---
+page_title: hcp vault-secrets apps create
+description: |-
+  The "hcp vault-secrets apps create" command lets you create a new Vault Secrets application.
+---
+
+# hcp vault-secrets apps create
+
+Command: `hcp vault-secrets apps create`
+
+The `hcp vault-secrets apps create` command creates a new Vault Secrets
+application.
+ +## Usage + +```shell-session +$ hcp vault-secrets apps create NAME [Optional Flags] +``` + +## Examples + +Create a new application: + +```shell-session +$ hcp vault-secrets apps create company-card \ + --description "Stores corporate card info." +``` + +## Positional arguments + +- `NAME` - The name of the app to create. + +## Flags + +- `--description=DESCRIPTION` - An optional description for the app to create. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/delete.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/delete.mdx new file mode 100644 index 0000000000..234c11ec9a --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/delete.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp vault-secrets apps delete +description: |- + The "hcp vault-secrets apps delete" command lets you delete a Vault Secrets application. +--- + +# hcp vault-secrets apps delete + +Command: `hcp vault-secrets apps delete` + +The `hcp vault-secrets apps delete` command deletes a Vault Secrets +application. + +## Usage + +```shell-session +$ hcp vault-secrets apps delete NAME [Optional Flags] +``` + +## Examples + +Delete an application: + +```shell-session +$ hcp vault-secrets apps delete company-card +``` + +## Positional arguments + +- `NAME` - The name of the app to delete. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/index.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/index.mdx new file mode 100644 index 0000000000..285b147ac1 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/index.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp vault-secrets apps +description: |- + The "hcp vault-secrets apps" command lets you manage Vault Secrets apps. +--- + +# hcp vault-secrets apps + +Command: `hcp vault-secrets apps` + +The `hcp vault-secrets apps` command group lets you manage Vault Secrets +applications. 
+
+## Usage
+
+```shell-session
+$ hcp vault-secrets apps [Optional Flags]
+```
+
+## Commands
+
+- [`create`](/hcp/docs/cli/commands/vault-secrets/apps/create) - Create a new Vault Secrets application.
+- [`delete`](/hcp/docs/cli/commands/vault-secrets/apps/delete) - Delete a Vault Secrets application.
+- [`read`](/hcp/docs/cli/commands/vault-secrets/apps/read) - Read a Vault Secrets application.
+- [`list`](/hcp/docs/cli/commands/vault-secrets/apps/list) - List Vault Secrets applications.
+- [`update`](/hcp/docs/cli/commands/vault-secrets/apps/update) - Update a Vault Secrets application.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/list.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/list.mdx
new file mode 100644
index 0000000000..1878d19cbf
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/list.mdx
@@ -0,0 +1,26 @@
+---
+page_title: hcp vault-secrets apps list
+description: |-
+  The "hcp vault-secrets apps list" command lets you list Vault Secrets applications.
+---
+
+# hcp vault-secrets apps list
+
+Command: `hcp vault-secrets apps list`
+
+The `hcp vault-secrets apps list` command lists all Vault Secrets applications.
+
+## Usage
+
+```shell-session
+$ hcp vault-secrets apps list [Optional Flags]
+```
+
+## Examples
+
+List applications:
+
+```shell-session
+$ hcp vault-secrets apps list
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/read.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/read.mdx
new file mode 100644
index 0000000000..11e7d376f8
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/read.mdx
@@ -0,0 +1,30 @@
+---
+page_title: hcp vault-secrets apps read
+description: |-
+  The "hcp vault-secrets apps read" command lets you read a Vault Secrets application.
+--- + +# hcp vault-secrets apps read + +Command: `hcp vault-secrets apps read` + +The `hcp vault-secrets apps read` command gets a Vault Secrets application. + +## Usage + +```shell-session +$ hcp vault-secrets apps read NAME [Optional Flags] +``` + +## Examples + +Read an application: + +```shell-session +$ hcp vault-secrets apps read company-card +``` + +## Positional arguments + +- `NAME` - The name of the app to read. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/update.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/update.mdx new file mode 100644 index 0000000000..dc022102ea --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/apps/update.mdx @@ -0,0 +1,35 @@ +--- +page_title: hcp vault-secrets apps update +description: |- + The "hcp vault-secrets apps update" command lets you update a Vault Secrets application. +--- + +# hcp vault-secrets apps update + +Command: `hcp vault-secrets apps update` + +The `hcp vault-secrets apps update` command updates the description of a Vault +Secrets application. + +## Usage + +```shell-session +$ hcp vault-secrets apps update NAME --description=DESCRIPTION [Optional Flags] +``` + +## Examples + +Update an application: + +```shell-session +$ hcp vault-secrets apps update company-card --description "Visa card info" +``` + +## Positional arguments + +- `NAME` - The name of the app to update. + +## Required flags + +- `--description=DESCRIPTION` - The updated app description. 
+
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/create.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/create.mdx
new file mode 100644
index 0000000000..bc2c310f40
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/create.mdx
@@ -0,0 +1,40 @@
+---
+page_title: hcp vault-secrets gateway-pools create
+description: |-
+  The "hcp vault-secrets gateway-pools create" command lets you create a new Vault Secrets gateway pool.
+---
+
+# hcp vault-secrets gateway-pools create
+
+Command: `hcp vault-secrets gateway-pools create`
+
+The `hcp vault-secrets gateway-pools create` command creates a new Vault
+Secrets gateway pool.
+
+## Usage
+
+```shell-session
+$ hcp vault-secrets gateway-pools create NAME [Optional Flags]
+```
+
+## Examples
+
+Create a new gateway pool:
+
+```shell-session
+$ hcp vault-secrets gateway-pools create company-tunnel \
+  --description "Tunnels to corporate network."
+```
+
+## Positional arguments
+
+- `NAME` - The name of the gateway pool to create.
+
+## Flags
+
+- `--description=DESCRIPTION` - An optional description for the gateway pool to create.
+
+- `-o, --output-dir=OUTPUT_DIR_PATH` - Directory path where the gateway credentials file and config file should be written.
+
+- `-s, --show-client-secret=SHOW_CLIENT_SECRET` - Show the client secret in the output. If this is not set, OUTPUT_DIR_PATH should be set.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/delete.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/delete.mdx
new file mode 100644
index 0000000000..6fa78ab690
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/delete.mdx
@@ -0,0 +1,31 @@
+---
+page_title: hcp vault-secrets gateway-pools delete
+description: |-
+  The "hcp vault-secrets gateway-pools delete" command lets you delete a Vault Secrets gateway pool.
+---
+
+# hcp vault-secrets gateway-pools delete
+
+Command: `hcp vault-secrets gateway-pools delete`
+
+The `hcp vault-secrets gateway-pools delete` command deletes a Vault Secrets
+gateway pool.
+
+## Usage
+
+```shell-session
+$ hcp vault-secrets gateway-pools delete NAME [Optional Flags]
+```
+
+## Examples
+
+Delete a gateway pool:
+
+```shell-session
+$ hcp vault-secrets gateway-pools delete company-tunnel
+```
+
+## Positional arguments
+
+- `NAME` - The name of the gateway pool to delete.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/index.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/index.mdx
new file mode 100644
index 0000000000..870dec2b97
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/index.mdx
@@ -0,0 +1,28 @@
+---
+page_title: hcp vault-secrets gateway-pools
+description: |-
+  The "hcp vault-secrets gateway-pools" command lets you manage Vault Secrets gateway pools.
+---
+
+# hcp vault-secrets gateway-pools
+
+Command: `hcp vault-secrets gateway-pools`
+
+The `hcp vault-secrets gateway-pools` command group lets you manage Vault
+Secrets gateway pools.
+
+## Usage
+
+```shell-session
+$ hcp vault-secrets gateway-pools [Optional Flags]
+```
+
+## Commands
+
+- [`list`](/hcp/docs/cli/commands/vault-secrets/gateway-pools/list) - List Vault Secrets gateway pools.
+- [`create`](/hcp/docs/cli/commands/vault-secrets/gateway-pools/create) - Create a new Vault Secrets gateway pool.
+- [`update`](/hcp/docs/cli/commands/vault-secrets/gateway-pools/update) - Update a Vault Secrets gateway pool.
+- [`delete`](/hcp/docs/cli/commands/vault-secrets/gateway-pools/delete) - Delete a Vault Secrets gateway pool.
+- [`read`](/hcp/docs/cli/commands/vault-secrets/gateway-pools/read) - Read a Vault Secrets gateway pool.
+- [`list-gateways`](/hcp/docs/cli/commands/vault-secrets/gateway-pools/list-gateways) - List the gateways in a Vault Secrets gateway pool.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/list-gateways.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/list-gateways.mdx
new file mode 100644
index 0000000000..6cff6f2268
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/list-gateways.mdx
@@ -0,0 +1,35 @@
+---
+page_title: hcp vault-secrets gateway-pools list-gateways
+description: |-
+  The "hcp vault-secrets gateway-pools list-gateways" command lets you list the gateways in a Vault Secrets gateway pool.
+---
+
+# hcp vault-secrets gateway-pools list-gateways
+
+Command: `hcp vault-secrets gateway-pools list-gateways`
+
+The `hcp vault-secrets gateway-pools list-gateways` command lists all gateways
+in a Vault Secrets gateway pool.
+
+## Usage
+
+```shell-session
+$ hcp vault-secrets gateway-pools list-gateways NAME [Optional Flags]
+```
+
+## Examples
+
+List a gateway pool's gateways:
+
+```shell-session
+$ hcp vault-secrets gateway-pools list-gateways company-tunnel
+```
+
+## Positional arguments
+
+- `NAME` - The name of the gateway pool whose gateways to list.
+
+## Flags
+
+- `-a, --show-all` - Show all fields.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/list.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/list.mdx
new file mode 100644
index 0000000000..f9e4bb2ff0
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/list.mdx
@@ -0,0 +1,27 @@
+---
+page_title: hcp vault-secrets gateway-pools list
+description: |-
+  The "hcp vault-secrets gateway-pools list" command lets you list Vault Secrets gateway pools.
+---
+
+# hcp vault-secrets gateway-pools list
+
+Command: `hcp vault-secrets gateway-pools list`
+
+The `hcp vault-secrets gateway-pools list` command lists all Vault Secrets
+gateway pools.
+ +## Usage + +```shell-session +$ hcp vault-secrets gateway-pools list [Optional Flags] +``` + +## Examples + +List gateway-pools: + +```shell-session +$ hcp vault-secrets gateway-pools list +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/read.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/read.mdx new file mode 100644 index 0000000000..c45782baf2 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/read.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp vault-secrets gateway-pools read +description: |- + The "hcp vault-secrets gateway-pools read" command lets you read a Vault Secrets gateway pool. +--- + +# hcp vault-secrets gateway-pools read + +Command: `hcp vault-secrets gateway-pools read` + +The `hcp vault-secrets gateway-pools read` command gets a Vault Secrets gateway +pool. + +## Usage + +```shell-session +$ hcp vault-secrets gateway-pools read NAME [Optional Flags] +``` + +## Examples + +Read a gateway pool: + +```shell-session +$ hcp vault-secrets gateway-pools read company-tunnel +``` + +## Positional arguments + +- `NAME` - The name of the gateway pool to read. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/update.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/update.mdx new file mode 100644 index 0000000000..bc23b1a243 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/gateway-pools/update.mdx @@ -0,0 +1,36 @@ +--- +page_title: hcp vault-secrets gateway-pools update +description: |- + The "hcp vault-secrets gateway-pools update" command lets you update a Vault Secrets gateway pool. +--- + +# hcp vault-secrets gateway-pools update + +Command: `hcp vault-secrets gateway-pools update` + +The `hcp vault-secrets gateway-pools update` command updates a Vault Secrets +gateway pool. 
+ +## Usage + +```shell-session +$ hcp vault-secrets gateway-pools update NAME [Optional Flags] +``` + +## Examples + +Update a gateway pool: + +```shell-session +$ hcp vault-secrets gateway-pools update company-tunnel --description "Extra secure tunnel for company secrets." +``` + +## Positional arguments + +- `NAME` - The name of the gateway pool to update. + +## Flags + +- `--description=DESCRIPTION` - The updated gateway pool description. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/index.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/index.mdx new file mode 100644 index 0000000000..7fe209ddc6 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/index.mdx @@ -0,0 +1,34 @@ +--- +page_title: hcp vault-secrets +description: |- + The "hcp vault-secrets" command lets you manage Vault Secrets. +--- + +# hcp vault-secrets + +Command: `hcp vault-secrets` + +The `hcp vault-secrets` command group lets you manage Vault Secrets resources +through the CLI. + +## Usage + +```shell-session +$ hcp vault-secrets [Optional Flags] +``` + +## Aliases + +- `vs`. For example: `hcp vs` + +## Command groups + +- [`apps`](/hcp/docs/cli/commands/vault-secrets/apps) - Manage Vault Secrets apps. +- [`integrations`](/hcp/docs/cli/commands/vault-secrets/integrations) - Manage Vault Secrets integrations. +- [`secrets`](/hcp/docs/cli/commands/vault-secrets/secrets) - Manage Vault Secrets application secrets. +- [`gateway-pools`](/hcp/docs/cli/commands/vault-secrets/gateway-pools) - Manage Vault Secrets gateway pools. + +## Commands + +- [`run`](/hcp/docs/cli/commands/vault-secrets/run) - Run a process with secrets from a Vault Secrets app.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/create.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/create.mdx new file mode 100644 index 0000000000..0e2e482ad4 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/create.mdx @@ -0,0 +1,42 @@ +--- +page_title: hcp vault-secrets integrations create +description: |- + The "hcp vault-secrets integrations create" command lets you create a new integration. +--- + +# hcp vault-secrets integrations create + +Command: `hcp vault-secrets integrations create` + +The `hcp vault-secrets integrations create` command creates a new Vault Secrets +integration. When the `--config-file` flag is specified, the configuration for +your integration will be read from the provided HCL config file. The following +fields are required: [type details]. For help populating the details for an +integration type, please refer to the [API reference +documentation](https://developer.hashicorp.com/hcp/api-docs/vault-secrets/2023-11-28). +When the `--config-file` flag is not specified, you will be prompted to create +the integration interactively. + +## Usage + +```shell-session +$ hcp vault-secrets integrations create NAME [Optional Flags] +``` + +## Examples + +Create a new Vault Secrets integration: + +```shell-session +$ hcp vault-secrets integrations create sample-integration --config-file=path-to-file/config.hcl +``` + +## Positional arguments + +- `NAME` - The name of the integration to create. + +## Flags + +- `--config-file=CONFIG_FILE` - File path to read integration config data.
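The shape of such a config file can be sketched as follows. This is a hypothetical illustration only: the `type` value and the field names inside `details` depend on the integration type and are not documented on this page, so consult the API reference for the exact schema.

```shell
# Write a skeleton HCL config for use with --config-file.
# The contents of the details block are placeholders, not the documented schema.
cat > config.hcl <<'EOF'
type = "twilio"

details = {
  # provider-specific fields for the chosen integration type go here;
  # see the API reference for the required keys
}
EOF

# The file would then be passed to the create command, for example:
#   hcp vault-secrets integrations create sample-integration --config-file=config.hcl
```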
+ diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/delete.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/delete.mdx new file mode 100644 index 0000000000..7741063715 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/delete.mdx @@ -0,0 +1,36 @@ +--- +page_title: hcp vault-secrets integrations delete +description: |- + The "hcp vault-secrets integrations delete" command lets you delete a Vault Secrets integration. +--- + +# hcp vault-secrets integrations delete + +Command: `hcp vault-secrets integrations delete` + +The `hcp vault-secrets integrations delete` command deletes a Vault Secrets +integration. The required `--type` flag may be any of the following: +[mongodb-atlas aws gcp twilio] + +## Usage + +```shell-session +$ hcp vault-secrets integrations delete NAME --type=TYPE [Optional Flags] +``` + +## Examples + +Delete an integration: + +```shell-session +$ hcp vault-secrets integrations delete sample-integration --type twilio +``` + +## Positional arguments + +- `NAME` - The name of the integration to delete. + +## Required flags + +- `--type=TYPE` - The type of the integration to delete. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/index.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/index.mdx new file mode 100644 index 0000000000..5c5773853f --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/index.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp vault-secrets integrations +description: |- + The "hcp vault-secrets integrations" command lets you manage Vault Secrets integrations. +--- + +# hcp vault-secrets integrations + +Command: `hcp vault-secrets integrations` + +The `hcp vault-secrets integrations` command group lets you manage Vault +Secrets integrations. 
+ +## Usage + +```shell-session +$ hcp vault-secrets integrations [Optional Flags] +``` + +## Commands + +- [`read`](/hcp/docs/cli/commands/vault-secrets/integrations/read) - Read a Vault Secrets integration. +- [`delete`](/hcp/docs/cli/commands/vault-secrets/integrations/delete) - Delete a Vault Secrets integration. +- [`list`](/hcp/docs/cli/commands/vault-secrets/integrations/list) - List Vault Secrets integrations. +- [`create`](/hcp/docs/cli/commands/vault-secrets/integrations/create) - Create a new integration. +- [`update`](/hcp/docs/cli/commands/vault-secrets/integrations/update) - Update an integration. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/list.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/list.mdx new file mode 100644 index 0000000000..0df44fd664 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/list.mdx @@ -0,0 +1,38 @@ +--- +page_title: hcp vault-secrets integrations list +description: |- + The "hcp vault-secrets integrations list" command lets you list Vault Secrets integrations. +--- + +# hcp vault-secrets integrations list + +Command: `hcp vault-secrets integrations list` + +The `hcp vault-secrets integrations list` command lists Vault Secrets generic +integrations. The optional `--type` flag may be any of the following: +[mongodb-atlas aws gcp twilio] + +## Usage + +```shell-session +$ hcp vault-secrets integrations list [Optional Flags] +``` + +## Examples + +List twilio integrations: + +```shell-session +$ hcp vault-secrets integrations list --type "twilio" +``` + +List all generic integrations: + +```shell-session +$ hcp vault-secrets integrations list +``` + +## Flags + +- `--type=TYPE` - The optional type of integration to list. 
+ diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/read.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/read.mdx new file mode 100644 index 0000000000..a9701868ce --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/read.mdx @@ -0,0 +1,35 @@ +--- +page_title: hcp vault-secrets integrations read +description: |- + The "hcp vault-secrets integrations read" command lets you read a Vault Secrets integration. +--- + +# hcp vault-secrets integrations read + +Command: `hcp vault-secrets integrations read` + +The `hcp vault-secrets integrations read` command gets a Vault Secrets +integration. + +## Usage + +```shell-session +$ hcp vault-secrets integrations read NAME [Optional Flags] +``` + +## Examples + +Read an integration: + +```shell-session +$ hcp vault-secrets integrations read sample-integration --type twilio +``` + +## Positional arguments + +- `NAME` - The name of the integration to read. + +## Flags + +- `--type=TYPE` - The type of the integration to read. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/update.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/update.mdx new file mode 100644 index 0000000000..77337030d9 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/integrations/update.mdx @@ -0,0 +1,41 @@ +--- +page_title: hcp vault-secrets integrations update +description: |- + The "hcp vault-secrets integrations update" command lets you update an integration. +--- + +# hcp vault-secrets integrations update + +Command: `hcp vault-secrets integrations update` + +The `hcp vault-secrets integrations update` command updates a Vault Secrets +integration. The configuration for updating your integration will be read from +the provided HCL config file. The following fields are required: [type +details]. 
For help populating the details for an integration type, please refer +to the [API reference +documentation](https://developer.hashicorp.com/hcp/api-docs/vault-secrets/2023-11-28). + +## Usage + +```shell-session +$ hcp vault-secrets integrations update NAME --config-file=CONFIG_FILE [Optional Flags] +``` + +## Examples + +Update a Vault Secrets integration: + +```shell-session +$ hcp vault-secrets integrations update sample-integration --config-file=path-to-file/config.hcl +``` + +## Positional arguments + +- `NAME` - The name of the integration to update. + +## Required flags + +- `--config-file=CONFIG_FILE` - File path to read integration config data. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/run.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/run.mdx new file mode 100644 index 0000000000..77d3071c69 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/run.mdx @@ -0,0 +1,43 @@ +--- +page_title: hcp vault-secrets run +description: |- + The "hcp vault-secrets run" command lets you run a process with secrets from a Vault Secrets app. +--- + +# hcp vault-secrets run + +Command: `hcp vault-secrets run` + +The `hcp vault-secrets run` command lets you run the provided command as a +child process while injecting all of the app's secrets as environment variables, +with all secret names converted to upper-case. STDIN, STDOUT, and STDERR will be +passed to the created child process. + +## Usage + +```shell-session +$ hcp vault-secrets run COMMAND [Optional Flags] +``` + +## Examples + +Display your current environment with app secrets included: + +```shell-session +$ hcp vault-secrets run 'env' +``` + +Inject secrets as environment variables: + +```shell-session +$ hcp vault-secrets run --app=my-app -- go run main.go --duration=1m +``` + +## Positional arguments + +- `COMMAND` - The command to invoke as the child process that receives the injected secrets.
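The upper-casing behavior described above can be simulated with plain `env(1)` outside of `hcp`. For example, a secret stored as `db_password` would surface in the child process as `DB_PASSWORD` (the secret name and value here are invented for illustration):

```shell
# hcp vault-secrets run performs this injection automatically; this only
# simulates the result: the child process sees the upper-cased variable name.
env DB_PASSWORD='example-value' sh -c 'printf "%s\n" "$DB_PASSWORD"'
# prints: example-value
```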
+ +## Flags + +- `--app=NAME` - The application you want to pull all secrets from. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/create.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/create.mdx new file mode 100644 index 0000000000..c4e055f46c --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/create.mdx @@ -0,0 +1,65 @@ +--- +page_title: hcp vault-secrets secrets create +description: |- + The "hcp vault-secrets secrets create" command lets you create a new secret. +--- + +# hcp vault-secrets secrets create + +Command: `hcp vault-secrets secrets create` + +The `hcp vault-secrets secrets create` command creates a new static, rotating, +or dynamic secret under a Vault Secrets application. The configuration for +creating your rotating or dynamic secret will be read from the provided HCL +config file. The following fields are required in the config file: [type +integration_name details]. For help populating the details for a dynamic or +rotating secret, please refer to the [API reference +documentation](https://developer.hashicorp.com/hcp/api-docs/vault-secrets/2023-11-28). 
+ +## Usage + +```shell-session +$ hcp vault-secrets secrets create NAME [Optional Flags] +``` + +## Examples + +Create a new static secret in the Vault Secrets application on your active profile: + +```shell-session +$ hcp vault-secrets secrets create secret_1 --data-file=tmp/secrets1.txt +``` + +Create a new secret in a Vault Secrets application by piping the plaintext secret from a command output: + +```shell-session +$ echo -n "my super secret" | hcp vault-secrets secrets create secret_2 --data-file=- +``` + +Create a new rotating secret on your active profile from a config file: + +```shell-session +$ hcp vault-secrets secrets create secret_1 --secret-type=rotating --data-file=path/to/file/config.hcl +``` + +Create a new dynamic secret interactively on your active profile: + +```shell-session +$ hcp vault-secrets secrets create secret_1 --secret-type=dynamic +``` + +## Positional arguments + +- `NAME` - The name of the secret to create. + +## Flags + +- `--data-file=DATA_FILE_PATH` - File path to read secret data from. Set this to '-' to read the secret data from stdin for a static secret. + +- `--secret-type=SECRET_TYPE` - The type of secret to create: static, rotating, or dynamic. + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/delete.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/delete.mdx new file mode 100644 index 0000000000..f2a0ff8a17 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/delete.mdx @@ -0,0 +1,41 @@ +--- +page_title: hcp vault-secrets secrets delete +description: |- + The "hcp vault-secrets secrets delete" command lets you delete a static secret.
+--- + +# hcp vault-secrets secrets delete + +Command: `hcp vault-secrets secrets delete` + +The `hcp vault-secrets secrets delete` command deletes a static secret under a +Vault Secrets application. + +## Usage + +```shell-session +$ hcp vault-secrets secrets delete NAME [Optional Flags] +``` + +## Examples + +Delete a secret from the Vault Secrets application on the active profile: + +```shell-session +$ hcp vault-secrets secrets delete secret_1 +``` + +Delete a secret from the specified Vault Secrets application: + +```shell-session +$ hcp vault-secrets secrets delete secret_2 --app test-app +``` + +## Positional arguments + +- `NAME` - The name of the secret to delete. + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/index.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/index.mdx new file mode 100644 index 0000000000..72b09370bf --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/index.mdx @@ -0,0 +1,37 @@ +--- +page_title: hcp vault-secrets secrets +description: |- + The "hcp vault-secrets secrets" command lets you manage Vault Secrets application secrets. +--- + +# hcp vault-secrets secrets + +Command: `hcp vault-secrets secrets` + +The `hcp vault-secrets secrets` command group lets you manage Vault Secrets +application secrets. + +## Usage + +```shell-session +$ hcp vault-secrets secrets [Optional Flags] +``` + +## Aliases + +- `s`. For example: `hcp vault-secrets s` + +## Command groups + +- [`versions`](/hcp/docs/cli/commands/vault-secrets/secrets/versions) - Manage Vault Secrets application secret's versions. + +## Commands + +- [`create`](/hcp/docs/cli/commands/vault-secrets/secrets/create) - Create a new secret. +- [`read`](/hcp/docs/cli/commands/vault-secrets/secrets/read) - Read a secret's metadata.
+- [`delete`](/hcp/docs/cli/commands/vault-secrets/secrets/delete) - Delete a static secret. +- [`list`](/hcp/docs/cli/commands/vault-secrets/secrets/list) - List an application's secrets. +- [`open`](/hcp/docs/cli/commands/vault-secrets/secrets/open) - Open a secret. +- [`rotate`](/hcp/docs/cli/commands/vault-secrets/secrets/rotate) - Rotate a rotating secret. +- [`update`](/hcp/docs/cli/commands/vault-secrets/secrets/update) - Update an existing dynamic or rotating secret. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/list.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/list.mdx new file mode 100644 index 0000000000..7c6c9be3c7 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/list.mdx @@ -0,0 +1,40 @@ +--- +page_title: hcp vault-secrets secrets list +description: |- + The "hcp vault-secrets secrets list" command lets you list an application's secrets. +--- + +# hcp vault-secrets secrets list + +Command: `hcp vault-secrets secrets list` + +The `hcp vault-secrets secrets list` command lists all secrets under a Vault +Secrets application. + +Individual secrets can be read using the `hcp vault-secrets secrets read` +subcommand. + +## Usage + +```shell-session +$ hcp vault-secrets secrets list [Optional Flags] +``` + +## Examples + +List all secrets under the Vault Secrets application on the active profile: + +```shell-session +$ hcp vault-secrets secrets list +``` + +List all secrets under the specified Vault Secrets application: + +```shell-session +$ hcp vault-secrets secrets list --app test-app +``` + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/open.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/open.mdx new file mode 100644 index 0000000000..eaa6fe1060 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/open.mdx @@ -0,0 +1,39 @@ +--- +page_title: hcp vault-secrets secrets open +description: |- + The "hcp vault-secrets secrets open" command lets you open a secret. +--- + +# hcp vault-secrets secrets open + +Command: `hcp vault-secrets secrets open` + +The `hcp vault-secrets secrets open` command reads the plaintext value of a +static, rotating, or dynamic secret from the Vault Secrets application. + +## Usage + +```shell-session +$ hcp vault-secrets secrets open NAME [Optional Flags] +``` + +## Examples + +Open a plaintext secret: + +```shell-session +$ hcp vault-secrets secrets open "test_secret" +``` + +## Positional arguments + +- `NAME` - The name of the secret to open. + +## Flags + +- `-o, --out-file=OUTPUT_FILE_PATH` - File path where the secret value should be written. + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/read.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/read.mdx new file mode 100644 index 0000000000..96e00b9be7 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/read.mdx @@ -0,0 +1,41 @@ +--- +page_title: hcp vault-secrets secrets read +description: |- + The "hcp vault-secrets secrets read" command lets you read a secret's metadata. +--- + +# hcp vault-secrets secrets read + +Command: `hcp vault-secrets secrets read` + +The `hcp vault-secrets secrets read` command reads a static, rotating, or +dynamic secret's metadata from the Vault Secrets application.
+ +## Usage + +```shell-session +$ hcp vault-secrets secrets read NAME [Optional Flags] +``` + +## Examples + +Read a secret's metadata: + +```shell-session +$ hcp vault-secrets secrets read "test_secret" +``` + +Read a secret's metadata from a specified Vault Secrets application: + +```shell-session +$ hcp vault-secrets secrets read "test_secret" --app test-app +``` + +## Positional arguments + +- `NAME` - The name of the secret to read. + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/rotate.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/rotate.mdx new file mode 100644 index 0000000000..d751036c83 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/rotate.mdx @@ -0,0 +1,41 @@ +--- +page_title: hcp vault-secrets secrets rotate +description: |- + The "hcp vault-secrets secrets rotate" command lets you rotate a rotating secret. +--- + +# hcp vault-secrets secrets rotate + +Command: `hcp vault-secrets secrets rotate` + +The `hcp vault-secrets secrets rotate` command rotates a rotating secret from +the Vault Secrets application. + +## Usage + +```shell-session +$ hcp vault-secrets secrets rotate NAME [Optional Flags] +``` + +## Examples + +Rotate a secret: + +```shell-session +$ hcp vault-secrets secrets rotate "test_secret" +``` + +Rotate a secret under the specified Vault Secrets application: + +```shell-session +$ hcp vault-secrets secrets rotate "test_secret" --app test-app +``` + +## Positional arguments + +- `NAME` - The name of the secret to rotate. + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/update.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/update.mdx new file mode 100644 index 0000000000..67fdb650fb --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/update.mdx @@ -0,0 +1,50 @@ +--- +page_title: hcp vault-secrets secrets update +description: |- + The "hcp vault-secrets secrets update" command lets you update an existing dynamic or rotating secret. +--- + +# hcp vault-secrets secrets update + +Command: `hcp vault-secrets secrets update` + +The `hcp vault-secrets secrets update` command updates an existing rotating +or dynamic secret under a Vault Secrets application. The configuration for +updating your rotating or dynamic secret will be read from the provided HCL +config file. The following fields are required in the config file: [type +details]. For help populating the details for a dynamic or rotating secret, +please refer to the [API reference +documentation](https://developer.hashicorp.com/hcp/api-docs/vault-secrets/2023-11-28). + +## Usage + +```shell-session +$ hcp vault-secrets secrets update NAME --data-file=DATA_FILE_PATH [Optional Flags] +``` + +## Examples + +Update a rotating secret in the Vault Secrets application on your active profile: + +```shell-session +$ hcp vault-secrets secrets update secret_1 --secret-type=rotating --data-file=tmp/secrets1.txt +``` + +## Positional arguments + +- `NAME` - The name of the secret to update. + +## Required flags + +- `--data-file=DATA_FILE_PATH` - File path to read secret data from. Set this to '-' to read the secret data from stdin. + +## Optional flags + +- `--secret-type=SECRET_TYPE` - The type of secret to update: rotating or dynamic. + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/versions/index.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/versions/index.mdx new file mode 100644 index 0000000000..ab28524250 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/versions/index.mdx @@ -0,0 +1,23 @@ +--- +page_title: hcp vault-secrets secrets versions +description: |- + The "hcp vault-secrets secrets versions" command lets you manage Vault Secrets application secret's versions. +--- + +# hcp vault-secrets secrets versions + +Command: `hcp vault-secrets secrets versions` + +The `hcp vault-secrets secrets versions` command group lets you manage a +secret's versions. + +## Usage + +```shell-session +$ hcp vault-secrets secrets versions [Optional Flags] +``` + +## Commands + +- [`list`](/hcp/docs/cli/commands/vault-secrets/secrets/versions/list) - List a secret's versions. + diff --git a/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/versions/list.mdx b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/versions/list.mdx new file mode 100644 index 0000000000..aae86c81b0 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/vault-secrets/secrets/versions/list.mdx @@ -0,0 +1,41 @@ +--- +page_title: hcp vault-secrets secrets versions list +description: |- + The "hcp vault-secrets secrets versions list" command lets you list a secret's versions. +--- + +# hcp vault-secrets secrets versions list + +Command: `hcp vault-secrets secrets versions list` + +The `hcp vault-secrets secrets versions list` command lists all versions for a +secret under a Vault Secrets application. 
+ +## Usage + +```shell-session +$ hcp vault-secrets secrets versions list NAME [Optional Flags] +``` + +## Examples + +List all versions of a secret under the Vault Secrets application on the active profile: + +```shell-session +$ hcp vault-secrets secrets versions list test_secret +``` + +List all versions of a secret under the specified Vault Secrets application: + +```shell-session +$ hcp vault-secrets secrets versions list test_secret --app test-app +``` + +## Positional arguments + +- `NAME` - The name of the secret. + +## Inherited Flags + +- `--app=NAME` - The name of the Vault Secrets application. If not specified, the value from the active profile will be used. + diff --git a/content/hcp-docs/content/docs/cli/commands/version.mdx b/content/hcp-docs/content/docs/cli/commands/version.mdx new file mode 100644 index 0000000000..c9940ed512 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/version.mdx @@ -0,0 +1,18 @@ +--- +page_title: hcp version +description: |- + The "hcp version" command lets you display the HCP CLI version. +--- + +# hcp version + +Command: `hcp version` + +The `hcp version` command displays the HCP CLI version. + +## Usage + +```shell-session +$ hcp version [Optional Flags] +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/actions/create.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/create.mdx new file mode 100644 index 0000000000..3caa6324d0 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/create.mdx @@ -0,0 +1,33 @@ +--- +page_title: hcp waypoint actions create reference +description: |- + The `hcp waypoint actions create` command creates a new action configuration in HCP Waypoint. +--- + +# hcp waypoint actions create reference + +Command: `hcp waypoint actions create` + +The `hcp waypoint actions create` command creates a new action configuration that you can use to +launch action runs.
+ +## Usage + +```shell-session +$ hcp waypoint actions create [Optional Flags] +``` + +## Flags + +- `--body` - The request body to submit when running the action. + +- `-d, --description` - The description of the action. + +- `--header [Repeatable]` - The headers to include in the request. This flag can be specified multiple times. + +- `--method` - The HTTP method to use when making the request. + +- `-n, --name` - The name of the action. + +- `--url` - The URL of the action. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/actions/delete.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/delete.mdx new file mode 100644 index 0000000000..91670e7021 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/delete.mdx @@ -0,0 +1,23 @@ +--- +page_title: hcp waypoint actions delete reference +description: |- + The `hcp waypoint actions delete` command deletes an existing action from HCP Waypoint. +--- + +# hcp waypoint actions delete reference + +Command: `hcp waypoint actions delete` + +The `hcp waypoint actions delete` command deletes an existing action. This will +remove the action completely from HCP Waypoint. + +## Usage + +```shell-session +$ hcp waypoint actions delete --name [Optional Flags] +``` + +## Required flags + +- `-n, --name` - The name of the action to delete. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/actions/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/index.mdx new file mode 100644 index 0000000000..52884a4cdb --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/index.mdx @@ -0,0 +1,29 @@ +--- +page_title: hcp waypoint actions overview +description: |- + In HCP Waypoint, you can create actions to interact with upstream APIs. Use `hcp waypoint actions` commands to manage actions and their options. 
+--- + +# hcp waypoint actions overview + +Command: `hcp waypoint actions` + +The `hcp waypoint actions` command group manages all action options for HCP +Waypoint. An action is a set of options that defines how the action is executed. +This includes the action's request type and name. Actions are used +to launch action runs, depending on the request type. + +## Usage + +```shell-session +$ hcp waypoint actions [Optional Flags] +``` + +## Commands + +- [`create`](/hcp/docs/cli/commands/waypoint/actions/create) - Create a new action configuration. +- [`read`](/hcp/docs/cli/commands/waypoint/actions/read) - Read more details about an action. +- [`update`](/hcp/docs/cli/commands/waypoint/actions/update) - Update an action. +- [`delete`](/hcp/docs/cli/commands/waypoint/actions/delete) - Delete an existing action. +- [`list`](/hcp/docs/cli/commands/waypoint/actions/list) - List all known actions. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/actions/list.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/list.mdx new file mode 100644 index 0000000000..7727394f57 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/list.mdx @@ -0,0 +1,19 @@ +--- +page_title: hcp waypoint actions list reference +description: |- + The `hcp waypoint actions list` command lists all of your actions in HCP Waypoint. +--- + +# hcp waypoint actions list reference + +Command: `hcp waypoint actions list` + +The `hcp waypoint actions list` command lists all known actions from HCP +Waypoint.
+
+## Usage
+
+```shell-session
+$ hcp waypoint actions list [Optional Flags]
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/actions/read.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/read.mdx
new file mode 100644
index 0000000000..fdc7790fbe
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/read.mdx
@@ -0,0 +1,23 @@
+---
+page_title: hcp waypoint actions read reference
+description: |-
+  The `hcp waypoint actions read` command returns an action's details from HCP Waypoint.
+---
+
+# hcp waypoint actions read reference
+
+Command: `hcp waypoint actions read`
+
+The `hcp waypoint actions read` command returns the details of an action
+configuration.
+
+## Usage
+
+```shell-session
+$ hcp waypoint actions read --name [Optional Flags]
+```
+
+## Required flags
+
+- `-n, --name` - The name of the action.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/actions/update.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/update.mdx
new file mode 100644
index 0000000000..279caf0f23
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/actions/update.mdx
@@ -0,0 +1,35 @@
+---
+page_title: hcp waypoint actions update reference
+description: |-
+  The `hcp waypoint actions update` command updates an action in HCP Waypoint.
+---
+
+# hcp waypoint actions update reference
+
+Command: `hcp waypoint actions update`
+
+The `hcp waypoint actions update` command updates an existing action
+configuration.
+
+## Usage
+
+```shell-session
+$ hcp waypoint actions update --name [Optional Flags]
+```
+
+## Required flags
+
+- `-n, --name` - The name of the action.
+
+## Optional flags
+
+- `--body` - The request body to submit when running the action.
+
+- `-d, --description` - The description of the action.
+
+- `--header [Repeatable]` - The headers to include in the request. This flag can be specified multiple times.
+
+- `--method` - The HTTP method to use when making the request.
+
+- `--url` - The URL of the action.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/create.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/create.mdx
new file mode 100644
index 0000000000..63d2b7e3b8
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/create.mdx
@@ -0,0 +1,42 @@
+---
+page_title: hcp waypoint add-ons create reference
+description: |-
+  The `hcp waypoint add-ons create` command creates a new HCP Waypoint add-on.
+---
+
+# hcp waypoint add-ons create reference
+
+Command: `hcp waypoint add-ons create`
+
+The `hcp waypoint add-ons create` command creates a new HCP Waypoint add-on.
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons create --add-on-definition-name=NAME --app=NAME [Optional
+  Flags]
+```
+
+## Examples
+
+Create a new HCP Waypoint add-on:
+
+```shell-session
+$ hcp waypoint add-ons create -n=my-addon -a=my-application
+-d=my-addon-definition
+```
+
+## Required flags
+
+- `--add-on-definition-name=NAME` - The name of the add-on definition to use.
+
+- `--app=NAME` - The name of the application to which the add-on will be added.
+
+## Optional flags
+
+- `-n, --name=NAME` - The name of the add-on. If no name is provided, a name will be generated.
+
+- `--var=KEY=VALUE [Repeatable]` - A variable to be used in the application. The flag can be repeated to specify multiple variables. Variables specified with the flag will override variables specified in a file.
+
+- `--var-file=FILE` - A file containing variables to be used in the application. The file should be in HCL format. Variables in the file will be overridden by variables specified with the --var flag.
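+
+You can combine both variable flags; values passed with `--var` take precedence over values from `--var-file`. The following sketch is illustrative only, and the file name and variable name are hypothetical:
+
+```shell-session
+$ hcp waypoint add-ons create --add-on-definition-name=my-addon-definition \
+    --app=my-application \
+    --var-file=addon-vars.hcl \
+    --var=instance_count=2
+```
+
+Here, if `addon-vars.hcl` also defines `instance_count`, the value supplied on the command line overrides it.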
+ diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/create.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/create.mdx new file mode 100644 index 0000000000..439a5c159a --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/create.mdx @@ -0,0 +1,66 @@ +--- +page_title: hcp waypoint add-ons definitions create reference +description: |- + The `hcp waypoint add-ons definitions create` command creates a new HCP Waypoint add-on definition. +--- + +# hcp waypoint add-ons definitions create reference + +Command: `hcp waypoint add-ons definitions create` + +The `hcp waypoint add-ons definitions create` command lets you create HCP +Waypoint add-on definitions. + +## Usage + +```shell-session +$ hcp waypoint add-ons definitions create + --tfc-no-code-module-source=TFC_NO_CODE_MODULE_SOURCE + --tfc-project-id=TFC_PROJECT_ID --tfc-project-name=TFC_PROJECT_NAME [Optional + Flags] +``` + +## Examples + +Create a new HCP Waypoint add-on definition: + +```shell-session +$ hcp waypoint add-ons definitions create -n=my-add-on-definition \ + -s="My Add-on Definition summary." \ + -d="My Add-on Definition description." \ + --readme-markdown-template-file="README.tpl" \ + --tfc-no-code-module-source="app.terraform.io/hashicorp/dir/template" \ + --tfc-project-name="my-tfc-project" \ + --tfc-project-id="prj-123456" \ + -l=label1 \ + -l=label2 +``` + +## Required flags + +- `--tfc-no-code-module-source=TFC_NO_CODE_MODULE_SOURCE` - The source of the Terraform no-code module. The expected format is + "NAMESPACE/NAME/PROVIDER". An optional "HOSTNAME/" can be added at the beginning + for a private registry. + +- `--tfc-project-id=TFC_PROJECT_ID` - The ID of the Terraform Cloud project where applications using this add-on definition will be created. 
+ +- `--tfc-project-name=TFC_PROJECT_NAME` - The name of the Terraform Cloud project where applications using this add-on definition will be created. + +## Optional flags + +- `-d, --description=DESCRIPTION` - The description of the add-on definition. + +- `-l, --label=LABEL [Repeatable]` - A label to apply to the add-on definition. + +- `-n, --name=NAME` - The name of the add-on definition. + +- `--readme-markdown-template-file=README_MARKDOWN_TEMPLATE_FILE_PATH` - The file containing the README markdown template. + +- `-s, --summary=SUMMARY` - The summary of the add-on definition. + +- `--tf-agent-pool-id=TF_AGENT_POOL_ID` - The ID of the Terraform agent pool to use for running Terraform operations. This is only applicable when the execution mode is set to 'agent'. + +- `--tf-execution-mode=TF_EXECUTION_MODE` - The execution mode of the HCP Terraform workspaces for add-ons using this add-on definition. + +- `--variable-options-file=VARIABLE_OPTIONS_FILE` - The file containing the HCL definition of Variable Options. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/delete.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/delete.mdx new file mode 100644 index 0000000000..17718cff23 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/delete.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp waypoint add-ons definitions delete reference +description: |- + The `hcp waypoint add-ons definitions delete` command deletes an HCP Waypoint add-on definition. +--- + +# hcp waypoint add-ons definitions delete reference + +Command: `hcp waypoint add-ons definitions delete` + +The `hcp waypoint add-ons definitions delete` command lets you delete an +existing HCP Waypoint add-on definition. 
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons definitions delete --name=NAME [Optional Flags]
+```
+
+## Examples
+
+Delete an HCP Waypoint add-on definition:
+
+```shell-session
+$ hcp waypoint add-ons definitions delete -n=my-addon-definition
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the add-on definition to be deleted.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/index.mdx
new file mode 100644
index 0000000000..d91f1d4181
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/index.mdx
@@ -0,0 +1,27 @@
+---
+page_title: hcp waypoint add-ons definitions overview
+description: |-
+  Create add-on definitions to standardize supporting infrastructure and services for HCP Waypoint applications. Use `hcp waypoint add-ons definitions` commands to manage add-on definitions.
+---
+
+# hcp waypoint add-ons definitions overview
+
+Command: `hcp waypoint add-ons definitions`
+
+The `hcp waypoint add-ons definitions` command group lets you manage HCP
+Waypoint add-on definitions.
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons definitions [Optional Flags]
+```
+
+## Commands
+
+- [`create`](/hcp/docs/cli/commands/waypoint/add-ons/definitions/create) - Create a new HCP Waypoint add-on definition.
+- [`delete`](/hcp/docs/cli/commands/waypoint/add-ons/definitions/delete) - Delete an HCP Waypoint add-on definition.
+- [`list`](/hcp/docs/cli/commands/waypoint/add-ons/definitions/list) - List all known HCP Waypoint add-on definitions.
+- [`read`](/hcp/docs/cli/commands/waypoint/add-ons/definitions/read) - Read an HCP Waypoint add-on definition.
+- [`update`](/hcp/docs/cli/commands/waypoint/add-ons/definitions/update) - Update an HCP Waypoint add-on definition.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/list.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/list.mdx new file mode 100644 index 0000000000..74bc41a940 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/list.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp waypoint add-ons definitions list reference +description: |- + The `hcp waypoint add-ons definitions list` command lists all of your HCP Waypoint add-on definitions. +--- + +# hcp waypoint add-ons definitions list reference + +Command: `hcp waypoint add-ons definitions list` + +The `hcp waypoint add-ons definitions list` command lets you list all existing +HCP Waypoint add-on definitions. + +## Usage + +```shell-session +$ hcp waypoint add-ons definitions list [Optional Flags] +``` + +## Examples + +List all known HCP Waypoint add-on definitions: + +```shell-session +$ hcp waypoint add-ons definitions list +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/read.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/read.mdx new file mode 100644 index 0000000000..00b45b129a --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/read.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp waypoint add-ons definitions read reference +description: |- + The `hcp waypoint add-ons definitions read` command lists details about an existing HCP Waypoint add-on definition. +--- + +# hcp waypoint add-ons definitions read reference + +Command: `hcp waypoint add-ons definitions read` + +The `hcp waypoint add-ons definitions read` command lets you read an existing +HCP Waypoint add-on definition. 
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons definitions read --name=NAME [Optional Flags]
+```
+
+## Examples
+
+Read an HCP Waypoint add-on definition:
+
+```shell-session
+$ hcp waypoint add-ons definitions read -n=my-addon-definition
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the add-on definition.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/update.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/update.mdx
new file mode 100644
index 0000000000..d45097280d
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/definitions/update.mdx
@@ -0,0 +1,58 @@
+---
+page_title: hcp waypoint add-ons definitions update reference
+description: |-
+  The `hcp waypoint add-ons definitions update` command updates an existing HCP Waypoint add-on definition.
+---
+
+# hcp waypoint add-ons definitions update reference
+
+Command: `hcp waypoint add-ons definitions update`
+
+The `hcp waypoint add-ons definitions update` command lets you update an
+existing HCP Waypoint add-on definition.
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons definitions update --name=NAME [Optional Flags]
+```
+
+## Examples
+
+Update an HCP Waypoint add-on definition:
+
+```shell-session
+$ hcp waypoint add-ons definitions update -n=my-add-on-definition \
+  -s="My updated Add-on Definition summary." \
+  -d="My updated Add-on Definition description." \
+  --readme-markdown-template-file "README.tpl" \
+  --tfc-project-name="my-tfc-project" \
+  --tfc-project-id="prj-123456" \
+  -l=label1 \
+  -l=label2
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the add-on definition.
+
+## Optional flags
+
+- `-d, --description=DESCRIPTION` - The description of the add-on definition.
+
+- `-l, --label=LABEL [Repeatable]` - A label to apply to the add-on definition.
+
+- `--readme-markdown-template-file=README_MARKDOWN_TEMPLATE_FILE` - The README markdown template file.
+ +- `-s, --summary=SUMMARY` - The summary of the add-on definition. + +- `--tf-agent-pool-id=TF_AGENT_POOL_ID` - The ID of the Terraform agent pool to use for running Terraform operations. This is only applicable when the execution mode is set to 'agent'. + +- `--tf-execution-mode=TF_EXECUTION_MODE` - The execution mode of the HCP Terraform workspaces for add-ons using this add-on definition. + +- `--tfc-project-id=TFC_PROJECT_ID` - The Terraform Cloud project ID. + +- `--tfc-project-name=TFC_PROJECT_NAME` - The Terraform Cloud project name. + +- `--variable-options-file=VARIABLE_OPTIONS_FILE` - The file containing the HCL definition of Variable Options. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/destroy.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/destroy.mdx new file mode 100644 index 0000000000..bb29bb55f0 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/destroy.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp waypoint add-ons destroy reference +description: |- + The `hcp waypoint add-ons destroy` command destroys an existing HCP Waypoint add-on. +--- + +# hcp waypoint add-ons destroy reference + +Command: `hcp waypoint add-ons destroy` + +The `hcp waypoint add-ons destroy` command lets you destroy an existing HCP +Waypoint add-on. + +## Usage + +```shell-session +$ hcp waypoint add-ons destroy --name=NAME [Optional Flags] +``` + +## Examples + +Destroy an HCP Waypoint add-on: + +```shell-session +$ hcp waypoint add-ons destroy -n=my-addon +``` + +## Required flags + +- `-n, --name=NAME` - The name of the add-on to destroy. 
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/index.mdx
new file mode 100644
index 0000000000..26646c0d94
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/index.mdx
@@ -0,0 +1,30 @@
+---
+page_title: hcp waypoint add-ons overview
+description: |-
+  Add-ons are components you can use to install services during an application's lifecycle. Use `hcp waypoint add-ons` commands to manage HCP Waypoint add-ons and add-on definitions.
+---
+
+# hcp waypoint add-ons overview
+
+Command: `hcp waypoint add-ons`
+
+The `hcp waypoint add-ons` command group lets you manage HCP Waypoint add-ons
+and add-on definitions.
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons [Optional Flags]
+```
+
+## Command groups
+
+- [`definitions`](/hcp/docs/cli/commands/waypoint/add-ons/definitions) - Manage HCP Waypoint add-on definitions.
+
+## Commands
+
+- [`create`](/hcp/docs/cli/commands/waypoint/add-ons/create) - Create a new HCP Waypoint add-on.
+- [`destroy`](/hcp/docs/cli/commands/waypoint/add-ons/destroy) - Destroy an HCP Waypoint add-on.
+- [`read`](/hcp/docs/cli/commands/waypoint/add-ons/read) - Read an HCP Waypoint add-on.
+- [`list`](/hcp/docs/cli/commands/waypoint/add-ons/list) - List HCP Waypoint add-ons.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/list.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/list.mdx
new file mode 100644
index 0000000000..485b85de38
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/list.mdx
@@ -0,0 +1,37 @@
+---
+page_title: hcp waypoint add-ons list reference
+description: |-
+  The `hcp waypoint add-ons list` command lists all of your HCP Waypoint add-ons.
+---
+
+# hcp waypoint add-ons list reference
+
+Command: `hcp waypoint add-ons list`
+
+The `hcp waypoint add-ons list` command lists HCP Waypoint add-ons.
By
+supplying the `--application-name` flag, you can list add-ons for a specific application.
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons list [Optional Flags]
+```
+
+## Examples
+
+List all HCP Waypoint add-ons:
+
+```shell-session
+$ hcp waypoint add-ons list
+```
+
+List HCP Waypoint add-ons for a specific application:
+
+```shell-session
+$ hcp waypoint add-ons list --application-name my-application
+```
+
+## Flags
+
+- `--application-name=APPLICATION_NAME` - The name of the application to list add-ons for.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/read.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/read.mdx
new file mode 100644
index 0000000000..4de2f11f68
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/add-ons/read.mdx
@@ -0,0 +1,31 @@
+---
+page_title: hcp waypoint add-ons read reference
+description: |-
+  The `hcp waypoint add-ons read` command reads an existing HCP Waypoint add-on.
+---
+
+# hcp waypoint add-ons read reference
+
+Command: `hcp waypoint add-ons read`
+
+The `hcp waypoint add-ons read` command lets you read an existing HCP Waypoint
+add-on.
+
+## Usage
+
+```shell-session
+$ hcp waypoint add-ons read --name=NAME [Optional Flags]
+```
+
+## Examples
+
+Read an HCP Waypoint add-on:
+
+```shell-session
+$ hcp waypoint add-ons read -n=my-addon
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the add-on.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/create.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/create.mdx
new file mode 100644
index 0000000000..680396e33f
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/create.mdx
@@ -0,0 +1,34 @@
+---
+page_title: hcp waypoint agent group create reference
+description: |-
+  The `hcp waypoint agent group create` command creates a new HCP Waypoint agent group.
+--- + +# hcp waypoint agent group create reference + +Command: `hcp waypoint agent group create` + +The `hcp waypoint agent group create` command creates a new Agent group. + +## Usage + +```shell-session +$ hcp waypoint agent group create --name=NAME [Optional Flags] +``` + +## Examples + +Create a new group: + +```shell-session +$ hcp waypoint agent group create -n='prod:us-west-2' -d='us west production access' +``` + +## Required flags + +- `-n, --name=NAME` - Name for the new group. + +## Optional flags + +- `-d, --description=TEXT` - Description for the group. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/delete.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/delete.mdx new file mode 100644 index 0000000000..b9954ccf88 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/delete.mdx @@ -0,0 +1,30 @@ +--- +page_title: hcp waypoint agent group delete reference +description: |- + The `hcp waypoint agent group delete` command deletes an HCP Waypoint agent group. +--- + +# hcp waypoint agent group delete reference + +Command: `hcp waypoint agent group delete` + +The `hcp waypoint agent group delete` command deletes an Agent group. + +## Usage + +```shell-session +$ hcp waypoint agent group delete --name=NAME [Optional Flags] +``` + +## Examples + +Delete a group: + +```shell-session +$ hcp waypoint agent group delete -n='prod:us-west-2' +``` + +## Required flags + +- `-n, --name=NAME` - Name of the group to delete. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/index.mdx new file mode 100644 index 0000000000..2c0373eb04 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/index.mdx @@ -0,0 +1,24 @@ +--- +page_title: hcp waypoint agent group overview +description: |- + HCP Waypoint agents let you execute actions inside private environments. 
Use `hcp waypoint agent group` commands to manage HCP Waypoint agent groups.
+---
+
+# hcp waypoint agent group overview
+
+Command: `hcp waypoint agent group`
+
+The `hcp waypoint agent group` command group manages agent groups.
+
+## Usage
+
+```shell-session
+$ hcp waypoint agent group [Optional Flags]
+```
+
+## Commands
+
+- [`create`](/hcp/docs/cli/commands/waypoint/agent/group/create) - Create a new HCP Waypoint Agent group.
+- [`list`](/hcp/docs/cli/commands/waypoint/agent/group/list) - List HCP Waypoint Agent groups.
+- [`delete`](/hcp/docs/cli/commands/waypoint/agent/group/delete) - Delete an HCP Waypoint Agent group.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/list.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/list.mdx
new file mode 100644
index 0000000000..3870c7eafc
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/group/list.mdx
@@ -0,0 +1,26 @@
+---
+page_title: hcp waypoint agent group list reference
+description: |-
+  The `hcp waypoint agent group list` command lists all of your HCP Waypoint agent groups.
+---
+
+# hcp waypoint agent group list reference
+
+Command: `hcp waypoint agent group list`
+
+The `hcp waypoint agent group list` command lists registered agent groups.
+
+## Usage
+
+```shell-session
+$ hcp waypoint agent group list [Optional Flags]
+```
+
+## Examples
+
+List all groups:
+
+```shell-session
+$ hcp waypoint agent group list
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/agent/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/index.mdx
new file mode 100644
index 0000000000..97b878e9bc
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/index.mdx
@@ -0,0 +1,29 @@
+---
+page_title: hcp waypoint agent overview
+description: |-
+  HCP Waypoint agents let you execute actions inside private environments. Use `hcp waypoint agent` commands to run and manage HCP Waypoint agents.
+---
+
+# hcp waypoint agent overview
+
+Command: `hcp waypoint agent`
+
+The `hcp waypoint agent` command group lets you run and manage a local Waypoint agent.
+
+Agents are a type of action that let you execute tasks in your private, on-premises environments.
+
+## Usage
+
+```shell-session
+$ hcp waypoint agent [Optional Flags]
+```
+
+## Command groups
+
+- [`group`](/hcp/docs/cli/commands/waypoint/agent/group) - Manage HCP Waypoint Agent groups.
+
+## Commands
+
+- [`run`](/hcp/docs/cli/commands/waypoint/agent/run) - Start the Waypoint Agent.
+- [`queue`](/hcp/docs/cli/commands/waypoint/agent/queue) - Queue an operation for an agent to execute.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/agent/queue.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/queue.mdx
new file mode 100644
index 0000000000..4700bd5067
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/queue.mdx
@@ -0,0 +1,30 @@
+---
+page_title: hcp waypoint agent queue reference
+description: |-
+  The `hcp waypoint agent queue` command queues an operation for an HCP Waypoint agent to execute.
+---
+
+# hcp waypoint agent queue reference
+
+Command: `hcp waypoint agent queue`
+
+The `hcp waypoint agent queue` command queues an operation for an agent to run.
+
+## Usage
+
+```shell-session
+$ hcp waypoint agent queue --group=NAME --id=ID [Optional Flags]
+```
+
+## Required flags
+
+- `-g, --group=NAME` - The agent group to run the operation on.
+
+- `-i, --id=ID` - The ID of the operation to run.
+
+## Optional flags
+
+- `--action-run=ID` - The action run to associate the operation with.
+
+- `-d, --body=JSON` - JSON to pass to the operation. Use @filename to read JSON from a file.
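+
+For example, you can queue an operation for a group and pass a JSON payload from a file. This is an illustrative sketch; the group name, operation ID, and file name are hypothetical:
+
+```shell-session
+$ hcp waypoint agent queue -g='prod:us-west-2' \
+    -i=my-operation-id \
+    -d=@payload.json
+```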
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/agent/run.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/run.mdx
new file mode 100644
index 0000000000..7d57bbeb9a
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/agent/run.mdx
@@ -0,0 +1,22 @@
+---
+page_title: hcp waypoint agent run reference
+description: |-
+  The `hcp waypoint agent run` command starts an existing HCP Waypoint agent.
+---
+
+# hcp waypoint agent run reference
+
+Command: `hcp waypoint agent run`
+
+The `hcp waypoint agent run` command executes a local Waypoint Agent.
+
+## Usage
+
+```shell-session
+$ hcp waypoint agent run [Optional Flags]
+```
+
+## Flags
+
+- `-c, --config=PATH` - The path to the agent configuration file.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/applications/create.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/create.mdx
new file mode 100644
index 0000000000..010799b150
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/create.mdx
@@ -0,0 +1,42 @@
+---
+page_title: hcp waypoint applications create reference
+description: |-
+  The `hcp waypoint applications create` command creates a new HCP Waypoint application.
+---
+
+# hcp waypoint applications create reference
+
+Command: `hcp waypoint applications create`
+
+The `hcp waypoint applications create` command lets you create a new HCP
+Waypoint application.
+
+## Usage
+
+```shell-session
+$ hcp waypoint applications create --name=NAME --template-name=TEMPLATE_NAME
+  [Optional Flags]
+```
+
+## Examples
+
+Create a new HCP Waypoint application:
+
+```shell-session
+$ hcp waypoint applications create -n=my-application -t=my-template
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the application.
+
+- `-t, --template-name=TEMPLATE_NAME` - The name of the template to use for the application.
+
+## Optional flags
+
+- `--action-config-name=ACTION_CONFIG_NAME [Repeatable]` - The name of the action configuration to be added to the application.
+
+- `--var=KEY=VALUE [Repeatable]` - A variable to be used in the application. The flag can be repeated to specify multiple variables. Variables specified with the flag will override variables specified in a file.
+
+- `--var-file=FILE` - A file containing variables to be used in the application. The file should be in HCL format. Variables in the file will be overridden by variables specified with the --var flag.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/applications/destroy.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/destroy.mdx
new file mode 100644
index 0000000000..658733def4
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/destroy.mdx
@@ -0,0 +1,31 @@
+---
+page_title: hcp waypoint applications destroy reference
+description: |-
+  The `hcp waypoint applications destroy` command destroys an existing HCP Waypoint application and its infrastructure.
+---
+
+# hcp waypoint applications destroy reference
+
+Command: `hcp waypoint applications destroy`
+
+The `hcp waypoint applications destroy` command lets you destroy an HCP
+Waypoint application and its infrastructure.
+
+## Usage
+
+```shell-session
+$ hcp waypoint applications destroy --name=NAME [Optional Flags]
+```
+
+## Examples
+
+Destroy an HCP Waypoint application:
+
+```shell-session
+$ hcp waypoint applications destroy -n=my-application
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the application to destroy.
+ diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/applications/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/index.mdx new file mode 100644 index 0000000000..584711c7d7 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/index.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp waypoint applications overview +description: |- + An HCP Waypoint application is a deployed instance of a software service created with a pre-defined template. Use `hcp waypoint applications` commands to manage HCP Waypoint applications. +--- + +# hcp waypoint applications overview + +Command: `hcp waypoint applications` + +The `hcp waypoint applications` command group lets you manage HCP Waypoint +applications. + +## Usage + +```shell-session +$ hcp waypoint applications [Optional Flags] +``` + +## Commands + +- [`create`](/hcp/docs/cli/commands/waypoint/applications/create) - Create a new HCP Waypoint application. +- [`destroy`](/hcp/docs/cli/commands/waypoint/applications/destroy) - Destroy an HCP Waypoint application and its infrastructure. +- [`list`](/hcp/docs/cli/commands/waypoint/applications/list) - List all HCP Waypoint applications. +- [`read`](/hcp/docs/cli/commands/waypoint/applications/read) - Read details about an HCP Waypoint application. +- [`update`](/hcp/docs/cli/commands/waypoint/applications/update) - Update an existing HCP Waypoint application. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/applications/list.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/list.mdx new file mode 100644 index 0000000000..593d64f004 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/list.mdx @@ -0,0 +1,19 @@ +--- +page_title: hcp waypoint applications list reference +description: |- + The `hcp waypoint applications list` command lists all of your HCP Waypoint applications. 
+--- + +# hcp waypoint applications list reference + +Command: `hcp waypoint applications list` + +The `hcp waypoint applications list` command lists all HCP Waypoint +applications. + +## Usage + +```shell-session +$ hcp waypoint applications list [Optional Flags] +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/applications/read.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/read.mdx new file mode 100644 index 0000000000..eb3a077860 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/read.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp waypoint applications read reference +description: |- + The `hcp waypoint applications read` command reads details about an existing HCP Waypoint application. +--- + +# hcp waypoint applications read reference + +Command: `hcp waypoint applications read` + +The `hcp waypoint applications read` command lets you read details about an HCP +Waypoint application. + +## Usage + +```shell-session +$ hcp waypoint applications read --name=NAME [Optional Flags] +``` + +## Examples + +Read an HCP Waypoint application: + +```shell-session +$ hcp waypoint applications read -n=my-application +``` + +## Required flags + +- `-n, --name=NAME` - The name of the HCP Waypoint application. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/applications/update.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/update.mdx new file mode 100644 index 0000000000..3e3d4dee04 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/applications/update.mdx @@ -0,0 +1,38 @@ +--- +page_title: hcp waypoint applications update reference +description: |- + The `hcp waypoint applications update` command updates an existing HCP Waypoint application. 
+---
+
+# hcp waypoint applications update reference
+
+Command: `hcp waypoint applications update`
+
+The `hcp waypoint applications update` command lets you update an existing HCP
+Waypoint application.
+
+## Usage
+
+```shell-session
+$ hcp waypoint applications update --name=NAME [Optional Flags]
+```
+
+## Examples
+
+Update an existing HCP Waypoint application:
+
+```shell-session
+$ hcp waypoint applications update -n=my-application --action-config-name
+my-action-config
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the HCP Waypoint application to update.
+
+## Optional flags
+
+- `--action-config-name=ACTION_CONFIG_NAME [Repeatable]` - The name of the action configuration to be added to the application.
+
+- `--readme-markdown-file=README_MARKDOWN_FILE` - The path to the README markdown file to be used for the application.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/index.mdx
new file mode 100644
index 0000000000..204a61c482
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/index.mdx
@@ -0,0 +1,29 @@
+---
+page_title: hcp waypoint overview
+description: |-
+  Use `hcp waypoint` commands to manage HCP Waypoint resources with the HCP CLI.
+---
+
+# hcp waypoint overview
+
+Command: `hcp waypoint`
+
+The `hcp waypoint` command group lets you manage HCP Waypoint resources through
+the CLI. These commands let you interact with your HCP Waypoint instance and
+manage your application deployment process.
+
+## Usage
+
+```shell-session
+$ hcp waypoint [Optional Flags]
+```
+
+## Command groups
+
+- [`tfc-config`](/hcp/docs/cli/commands/waypoint/tfc-config) - Manage Terraform Cloud Configurations.
+- [`actions`](/hcp/docs/cli/commands/waypoint/actions) - Manage action configuration options for HCP Waypoint.
+- [`agent`](/hcp/docs/cli/commands/waypoint/agent) - Run and manage a Waypoint Agent.
+- [`templates`](/hcp/docs/cli/commands/waypoint/templates) - Manage HCP Waypoint templates. +- [`add-ons`](/hcp/docs/cli/commands/waypoint/add-ons) - Manage HCP Waypoint add-ons and add-on definitions. +- [`applications`](/hcp/docs/cli/commands/waypoint/applications) - Manage HCP Waypoint applications. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/templates/create.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/create.mdx new file mode 100644 index 0000000000..e69db5d2d2 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/create.mdx @@ -0,0 +1,66 @@ +--- +page_title: hcp waypoint templates create reference +description: |- + The `hcp waypoint templates create` command creates a new HCP Waypoint template. +--- + +# hcp waypoint templates create reference + +Command: `hcp waypoint templates create` + +The `hcp waypoint templates create` command lets you create HCP Waypoint +templates. + +## Usage + +```shell-session +$ hcp waypoint templates create --name=NAME --summary=SUMMARY + --tfc-no-code-module-source=TFC_NO_CODE_MODULE_SOURCE + --tfc-project-id=TFC_PROJECT_ID --tfc-project-name=TFC_PROJECT_NAME [Optional + Flags] +``` + +## Examples + +Create a new HCP Waypoint template: + +```shell-session +$ hcp waypoint templates create -n=my-template \ + -s="My Template Summary" \ + -d="My Template Description" \ + --readme-markdown-template-file "README.tpl" \ + --tfc-no-code-module-source="app.terraform.io/hashicorp/dir/template" \ + --tfc-project-name="my-tfc-project" \ + --tfc-project-id="prj-123456" \ + -l="label1" \ + -l="label2" +``` + +## Required flags + +- `-n, --name=NAME` - The name of the template. + +- `-s, --summary=SUMMARY` - The summary of the template. + +- `--tfc-no-code-module-source=TFC_NO_CODE_MODULE_SOURCE` - The source of the Terraform no-code module. The expected format is + "NAMESPACE/NAME/PROVIDER". 
An optional "HOSTNAME/" can be added at the beginning
+  for a private registry.
+
+- `--tfc-project-id=TFC_PROJECT_ID` - The ID of the HCP Terraform project where applications using this template will be created.
+
+- `--tfc-project-name=TFC_PROJECT_NAME` - The name of the HCP Terraform project where applications using this template will be created.
+
+## Optional flags
+
+- `-d, --description=DESCRIPTION` - The description of the template.
+
+- `-l, --label=LABEL [Repeatable]` - A label to apply to the template.
+
+- `--readme-markdown-template-file=README_MARKDOWN_TEMPLATE_FILE_PATH` - The file containing the README markdown template.
+
+- `--tf-agent-pool-id=TF_AGENT_POOL_ID` - The ID of the Terraform agent pool to use for running Terraform operations. This is only applicable when the execution mode is set to 'agent'.
+
+- `--tf-execution-mode=TF_EXECUTION_MODE` - The execution mode of the HCP Terraform workspaces for applications using this template.
+
+- `--variable-options-file=VARIABLE_OPTIONS_FILE` - The file containing the HCL definition of Variable Options.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/templates/delete.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/delete.mdx
new file mode 100644
index 0000000000..7520d2fecd
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/delete.mdx
@@ -0,0 +1,31 @@
+---
+page_title: hcp waypoint templates delete reference
+description: |-
+  The `hcp waypoint templates delete` command deletes an existing HCP Waypoint template.
+---
+
+# hcp waypoint templates delete reference
+
+Command: `hcp waypoint templates delete`
+
+The `hcp waypoint templates delete` command lets you delete existing HCP
+Waypoint templates.
+ +## Usage + +```shell-session +$ hcp waypoint templates delete --name=NAME [Optional Flags] +``` + +## Examples + +Delete an existing HCP Waypoint template: + +```shell-session +$ hcp waypoint templates delete -n=my-template +``` + +## Required flags + +- `-n, --name=NAME` - The name of the template to be deleted. + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/templates/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/index.mdx new file mode 100644 index 0000000000..a67e22f555 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/index.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp waypoint templates overview +description: |- + HCP Waypoint templates are patterns created for app developers to deploy standardized infrastructure for applications. Use `hcp waypoint templates` commands to manage templates. +--- + +# hcp waypoint templates overview + +Command: `hcp waypoint templates` + +The `hcp waypoint templates` command group lets you manage HCP Waypoint +templates. A template is a reusable configuration for creating applications. + +## Usage + +```shell-session +$ hcp waypoint templates [Optional Flags] +``` + +## Commands + +- [`create`](/hcp/docs/cli/commands/waypoint/templates/create) - Create a new HCP Waypoint template. +- [`delete`](/hcp/docs/cli/commands/waypoint/templates/delete) - Delete an existing Waypoint template. +- [`read`](/hcp/docs/cli/commands/waypoint/templates/read) - Read more details about an HCP Waypoint template. +- [`list`](/hcp/docs/cli/commands/waypoint/templates/list) - List all HCP Waypoint templates. +- [`update`](/hcp/docs/cli/commands/waypoint/templates/update) - Update an existing HCP Waypoint template. 
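+
+For example, a typical workflow chains these commands to create, inspect, and remove a template. The names and values below are illustrative, and the sketch reuses only flags documented on the individual command pages:
+
+```shell-session
+$ hcp waypoint templates create -n=my-template \
+  -s="My Template Summary" \
+  --tfc-no-code-module-source="app.terraform.io/hashicorp/dir/template" \
+  --tfc-project-name="my-tfc-project" \
+  --tfc-project-id="prj-123456"
+$ hcp waypoint templates read -n=my-template
+$ hcp waypoint templates delete -n=my-template
+```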
+ diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/templates/list.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/list.mdx new file mode 100644 index 0000000000..0af5f1fbd2 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/list.mdx @@ -0,0 +1,27 @@ +--- +page_title: hcp waypoint templates list reference +description: |- + The `hcp waypoint templates list` command lists all your HCP Waypoint templates. +--- + +# hcp waypoint templates list reference + +Command: `hcp waypoint templates list` + +The `hcp waypoint templates list` command lets you list existing HCP Waypoint +templates. + +## Usage + +```shell-session +$ hcp waypoint templates list [Optional Flags] +``` + +## Examples + +List all HCP Waypoint templates: + +```shell-session +$ hcp waypoint templates list +``` + diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/templates/read.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/read.mdx new file mode 100644 index 0000000000..81e1220741 --- /dev/null +++ b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/read.mdx @@ -0,0 +1,31 @@ +--- +page_title: hcp waypoint templates read reference +description: |- + The `hcp waypoint templates read` command reads details about an HCP Waypoint template. +--- + +# hcp waypoint templates read reference + +Command: `hcp waypoint templates read` + +The `hcp waypoint templates read` command lets you read an existing HCP +Waypoint template. + +## Usage + +```shell-session +$ hcp waypoint templates read --name=NAME [Optional Flags] +``` + +## Examples + +Read an HCP Waypoint template: + +```shell-session +$ hcp waypoint templates read -n=my-template +``` + +## Required flags + +- `-n, --name=NAME` - The name of the template. 
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/templates/update.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/update.mdx
new file mode 100644
index 0000000000..c02439cb9b
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/templates/update.mdx
@@ -0,0 +1,58 @@
+---
+page_title: hcp waypoint templates update reference
+description: |-
+  The `hcp waypoint templates update` command updates an existing HCP Waypoint template.
+---
+
+# hcp waypoint templates update reference
+
+Command: `hcp waypoint templates update`
+
+The `hcp waypoint templates update` command lets you update existing HCP
+Waypoint templates.
+
+## Usage
+
+```shell-session
+$ hcp waypoint templates update --name=NAME [Optional Flags]
+```
+
+## Examples
+
+Update an HCP Waypoint template:
+
+```shell-session
+$ hcp waypoint templates update -n=my-template \
+  -s="My Template Summary" \
+  -d="My Template Description" \
+  --readme-markdown-template-file "README.tpl" \
+  --tfc-project-name="my-tfc-project" \
+  --tfc-project-id="prj-123456" \
+  -l="label1" \
+  -l="label2"
+```
+
+## Required flags
+
+- `-n, --name=NAME` - The name of the template to be updated.
+
+## Optional flags
+
+- `-d, --description=DESCRIPTION` - The description of the template.
+
+- `-l, --label=LABEL [Repeatable]` - A label to apply to the template.
+
+- `--readme-markdown-template-file=README_MARKDOWN_TEMPLATE_FILE_PATH` - The file containing the README markdown template.
+
+- `-s, --summary=SUMMARY` - The summary of the template.
+
+- `--tf-agent-pool-id=TF_AGENT_POOL_ID` - The ID of the Terraform agent pool to use for running Terraform operations. This is only applicable when the execution mode is set to 'agent'.
+
+- `--tf-execution-mode=TF_EXECUTION_MODE` - The execution mode of the HCP Terraform workspaces for applications using this template.
+
+- `--tfc-project-id=TFC_PROJECT_ID` - The ID of the HCP Terraform project where applications using this template will be created.
+
+- `--tfc-project-name=TFC_PROJECT_NAME` - The name of the HCP Terraform project where applications using this template will be created.
+
+- `--variable-options-file=VARIABLE_OPTIONS_FILE` - The file containing the HCL definition of Variable Options.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/create.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/create.mdx
new file mode 100644
index 0000000000..4af1dd3d58
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/create.mdx
@@ -0,0 +1,46 @@
+---
+page_title: hcp waypoint tfc-config create reference
+description: |-
+  The `hcp waypoint tfc-config create` command sets the HCP Terraform organization and team token that HCP Waypoint uses.
+---
+
+# hcp waypoint tfc-config create reference
+
+Command: `hcp waypoint tfc-config create`
+
+The `hcp waypoint tfc-config create` command sets the HCP Terraform organization name and HCP Terraform team token that HCP Waypoint uses.
+
+There can only be one HCP Terraform configuration set for each HCP Project.
+
+Review HCP Terraform configurations with the `hcp waypoint tfc-config read` command and remove them with the `hcp waypoint tfc-config delete` command.
+
+## Usage
+
+```shell-session
+$ hcp waypoint tfc-config create TFC_ORG TOKEN [Optional Flags]
+```
+
+## Examples
+
+Create a new HCP Terraform configuration in HCP Waypoint, where `TFC_TEAM_TOKEN` is a placeholder for your HCP Terraform team token:
+
+```shell-session
+$ hcp waypoint tfc-config create example-org TFC_TEAM_TOKEN
+```
+
+## Positional arguments
+
+- `TFC_ORG` - Name of the HCP Terraform organization.
+
+- `TOKEN` - HCP Terraform team token for the HCP Terraform organization.
+
+  You must set a team token to perform HCP Waypoint commands.
+
+  Refer to the [API tokens
+  documentation](https://developer.hashicorp.com/terraform/cloud-docs/users-teams-organizations/api-tokens)
+  to learn more.
+
+  HCP Waypoint requires team-level access tokens to run correctly. Ensure that your HCP Terraform configuration token has the correct permissions.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/delete.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/delete.mdx
new file mode 100644
index 0000000000..c51a2fe0aa
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/delete.mdx
@@ -0,0 +1,28 @@
+---
+page_title: hcp waypoint tfc-config delete reference
+description: |-
+  The `hcp waypoint tfc-config delete` command deletes an existing HCP Terraform configuration in HCP Waypoint.
+---
+
+# hcp waypoint tfc-config delete reference
+
+Command: `hcp waypoint tfc-config delete`
+
+The `hcp waypoint tfc-config delete` command deletes
+the HCP Terraform organization name and team token that is set for this HCP
+Project. Only one HCP Terraform configuration is allowed for each HCP Project.
+
+## Usage
+
+```shell-session
+$ hcp waypoint tfc-config delete [Optional Flags]
+```
+
+## Examples
+
+Delete the saved HCP Terraform configuration from Waypoint for this HCP Project ID:
+
+```shell-session
+$ hcp waypoint tfc-config delete
+```
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/index.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/index.mdx
new file mode 100644
index 0000000000..2f79b80be7
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/index.mdx
@@ -0,0 +1,24 @@
+---
+page_title: hcp waypoint tfc-config overview
+description: |-
+  Use `hcp waypoint tfc-config` commands to manage your HCP Terraform configurations in HCP Waypoint.
+---
+
+# hcp waypoint tfc-config overview
+
+Command: `hcp waypoint tfc-config`
+
+The `hcp waypoint tfc-config` command group manages HCP Terraform configurations. Create a new HCP Terraform configuration with `hcp waypoint tfc-config create` and view existing configurations with `hcp waypoint tfc-config read`.
+
+## Usage
+
+```shell-session
+$ hcp waypoint tfc-config [Optional Flags]
+```
+
+## Commands
+
+- [`create`](/hcp/docs/cli/commands/waypoint/tfc-config/create) - Set HCP Terraform configurations.
+- [`delete`](/hcp/docs/cli/commands/waypoint/tfc-config/delete) - Delete HCP Terraform configurations.
+- [`read`](/hcp/docs/cli/commands/waypoint/tfc-config/read) - Read HCP Terraform configuration properties.
+
diff --git a/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/read.mdx b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/read.mdx
new file mode 100644
index 0000000000..aa9fda4136
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/commands/waypoint/tfc-config/read.mdx
@@ -0,0 +1,26 @@
+---
+page_title: hcp waypoint tfc-config read reference
+description: |-
+  The `hcp waypoint tfc-config read` command lets you read HCP Terraform configuration details in HCP Waypoint.
+---
+
+# hcp waypoint tfc-config read reference
+
+Command: `hcp waypoint tfc-config read`
+
+The `hcp waypoint tfc-config read` command returns the HCP Terraform organization name and a redacted form of the HCP Terraform team token that is set for this HCP Project. There can only be one HCP Terraform configuration set for each HCP Project.
+
+## Usage
+
+```shell-session
+$ hcp waypoint tfc-config read [Optional Flags]
+```
+
+## Examples
+
+Retrieve the saved HCP Terraform configuration from Waypoint for this HCP Project ID:
+
+```shell-session
+$ hcp waypoint tfc-config read
+```
+
diff --git a/content/hcp-docs/content/docs/cli/index.mdx b/content/hcp-docs/content/docs/cli/index.mdx
new file mode 100644
index 0000000000..95990e05a2
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/index.mdx
@@ -0,0 +1,40 @@
+---
+page_title: hcp
+description: |-
+  Interact with HCP.
+---
+
+# HCP CLI
+
+@include 'hcp-cli/cli-intro.mdx'
+
+## Installation
+
+Refer to the [installation guide](/hcp/docs/cli/install) for instructions on installing the HCP CLI.
+
+## Getting started
+
+To get started with the `hcp` CLI, first log in to your account:
+
+```shell-session
+$ hcp auth login
+```
+
+This command opens a browser so that you can log in interactively. For other ways to
+log in, refer to the [`hcp auth login` command
+reference](/hcp/docs/cli/commands/auth/login).
+
+After logging in, configure the CLI to interact with your HCP organization and project. The following command interactively prompts you to select the organization and project.
+
+```shell-session
+$ hcp profile init
+```
+
+Once you have selected the HCP organization and project, you can use the HCP CLI to manage the resources for that project. If you want to quickly run a command against a different project, pass the `--project` flag to
+the command.
+
+You have configured the HCP CLI! To verify, retrieve a list of HCP organizations you have access to.
+
+```shell-session
+$ hcp organizations list
+```
diff --git a/content/hcp-docs/content/docs/cli/install.mdx b/content/hcp-docs/content/docs/cli/install.mdx
new file mode 100644
index 0000000000..0a86d241d4
--- /dev/null
+++ b/content/hcp-docs/content/docs/cli/install.mdx
@@ -0,0 +1,83 @@
+---
+page_title: Install
+description: |-
+  Install the HCP CLI.
+--- + +# Install HCP CLI + +The `hcp` CLI is available as a pre-compiled binary or as a package for several +operating systems. + + + + +@include 'hcp-cli/homebrew-install.mdx' + + + + +HashiCorp officially maintains and signs packages for the following Linux +distributions. + + + + +@include 'hcp-cli/apt-install.mdx' + + + + +@include 'hcp-cli/yum-install.mdx' + + + + +@include 'hcp-cli/dnf-install.mdx' + + + + +@include 'hcp-cli/aws-install.mdx' + + + + + + + + +@include 'hcp-cli/manual-install.mdx' + + + + +## Verify the installation + +@include 'hcp-cli/verify-install.mdx' + +## Install tab autocomplete + + + +If you installed the CLI using one of the Linux packages, it will have +automatically installed autocomplete. + + + +Install autocompletion for the HCP CLI. + + ```shell-session + $ hcp -autocomplete-install + ``` + + For the autocomplete to take effect, either create a new terminal session or + source your terminal configuration. + +To uninstall autocompletion for the HCP CLI, use `-autocomplete-uninstall`. + +```shell-session +$ hcp -autocomplete-uninstall +``` + +[gpg-key]: https://apt.releases.hashicorp.com/gpg "HashiCorp GPG key" diff --git a/content/hcp-docs/content/docs/consul/concepts/cluster-management.mdx b/content/hcp-docs/content/docs/consul/concepts/cluster-management.mdx new file mode 100644 index 0000000000..54e895b3dd --- /dev/null +++ b/content/hcp-docs/content/docs/consul/concepts/cluster-management.mdx @@ -0,0 +1,44 @@ +--- +page_title: Cluster management +description: |- + Learn about the types of Consul clusters you can use with HCP, including their role in deployments and the differences between HCP Consul Dedicated and self-managed Community and Enterprise clusters. +--- + +# Cluster management + +@include 'alerts/consul-dedicated-eol.mdx' + +This page explains concepts associated with *HCP Consul Dedicated clusters*. 
+
+## Background
+
+You may be familiar with clusters in other contexts, but a *cluster* in Consul refers to the group of Consul servers that participate in a datacenter's Raft quorum. Consul servers are deployed in a cluster with either three or five voting members in a typical production scenario, although single server deployments are possible for testing and development purposes.
+
+For more information about Consul servers and how they work, refer to the following concepts in the [Consul documentation](/consul/docs):
+
+- [Consul architecture](/consul/docs/architecture) describes the overall role Consul servers play in a deployment
+- [Consensus protocol](/consul/docs/architecture/consensus) describes how servers elect leaders and establish quorum through the Raft protocol
+- [Fault tolerance](/consul/docs/architecture/improving-consul-resilience) describes how the number of Consul servers impacts network resiliency
+
+## HCP Consul Dedicated clusters
+
+An *HCP Consul Dedicated cluster* is a set of one to three Consul servers that are installed, bootstrapped, and configured by HashiCorp, and hosted in an HCP Consul Dedicated environment. You can create an HCP Consul Dedicated cluster in either an AWS or Azure environment. Because we are responsible for the initial setup process, using HCP Consul Dedicated clusters removes much of the initial operating burden associated with implementing Consul in your service network. We are also responsible for maintaining the hardware and ensuring its availability, freeing up your time and resources for other network operations.
+
+When you use HCP Consul Dedicated clusters, we deploy and maintain the control plane but you are still responsible for operating the data plane. This includes hosting and maintaining your services in your preferred environment, as well as registering them with Consul.
To enable communication between the HCP Consul Dedicated control plane and services in the data plane, you must create a [HashiCorp Virtual Network (HVN)](/hcp/docs/hcp/network) and peer your HVN to AWS or Azure. To learn more about the process for each cloud environment, refer to [Create and manage an HVN for AWS](/hcp/docs/hcp/network/hvn-aws/hvn-aws) or [Create and manage an HVN for Azure](/hcp/docs/hcp/network/hvn-azure/hvn-azure).
+
+The following diagrams describe the architecture for connections between HCP Consul Dedicated clusters and data plane components hosted in a user-managed environment:
+
+
+
+
+![Diagram of peering architecture for HCP Consul Dedicated on AWS](/img/docs/consul/hcp-consul-aws-architecture.png)
+
+
+
+
+![Diagram of peering architecture for HCP Consul Dedicated on Azure](/img/docs/consul/hcp-consul-azure-architecture.png)
+
+
+
+
+When deploying an HCP Consul Dedicated cluster, the [cluster tier](/hcp/docs/consul/concepts/cluster-tiers) and cluster size you select affect the cluster's functionality and the number of service instances it can support.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/concepts/cluster-tiers.mdx b/content/hcp-docs/content/docs/consul/concepts/cluster-tiers.mdx
new file mode 100644
index 0000000000..2ce85a394c
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/concepts/cluster-tiers.mdx
@@ -0,0 +1,134 @@
+---
+page_title: HCP Consul tiers
+sidebar_title: Tiers
+description: |-
+  Learn about the cluster tiers in HCP Consul: development, essentials, standard, and premium. Tiers support multi-region and multi-cloud HCP Consul deployments to connect AWS, Azure, and on-prem networks.
+---
+
+# HCP Consul Dedicated tiers
+
+@include 'alerts/consul-dedicated-eol.mdx'
+
+This page explains the concept of HCP Consul Dedicated cluster tiers.
It describes how each tier supports multi-cloud production deployments, the billing models the tiers are available on, and the available deployment sizes for clusters at each tier. Deployment sizes are measured in the number of service instances they support.
+
+When you create a new HCP Consul Dedicated cluster, you can choose between four cluster tiers: [development](#development-tier), [essentials](#essentials-tier), [standard](#standard-tier), and [premium](#premium-tier). The tier you choose determines the cluster's connectivity when using [WAN federation](/hcp/docs/consul/extend/federation) and [cluster peering](/hcp/docs/consul/extend/cluster-peering).
+
+## Background
+
+When you create an HCP Consul Dedicated server, HashiCorp bootstraps a Consul server and uses your peered HashiCorp Virtual Network (HVN) to connect the Consul server to your cloud provider. This process simplifies the operations required to enable Consul's service discovery and service mesh features for services already deployed in AWS or Azure environments.
+
+When you create an HCP Consul Dedicated server, you must choose one of four cluster tiers for your server. Each tier has size options to select from. A _cluster tier_ is different from a _cluster size_. While cluster size determines how many service instances the Consul server can support, the cluster tier determines the server's ability to participate in multi-region and multi-cloud deployments. You cannot change an HCP Consul Dedicated cluster's size after creation, so when you select a tier it is important to know the size of the deployment you expect on each cluster.
+
+Tiers vary in price according to their offerings. Essentials tier clusters and standard tier clusters on AWS are available to all users. Standard tier clusters on Azure and premium tier clusters on both AWS and Azure require an annual entitlement contract or flex billing subscription.
Refer to [billing overview](/hcp/docs/hcp/admin/billing) for more information about billing models and terms.
+
+For pricing information on product tiers, refer to [HCP Consul Dedicated pricing](https://www.hashicorp.com/products/consul/pricing).
+
+## Cluster tiers
+
+The following cluster tiers are available in HCP Consul Dedicated:
+
+- [Development tier](#development-tier)
+- [Essentials tier](#essentials-tier)
+- [Standard tier](#standard-tier)
+- [Premium tier](#premium-tier)
+
+Tiers differ in their support for multi-region and multi-cloud deployments. Their availability is tied to specific billing models. The best tier to select depends on your organization's specific needs.
+
+The following table summarizes the differences between the cluster tiers:
+
+| Cluster tier | Supported billing models | Single region support | Single cloud provider support | Multi-region support | Multi-cloud support | Production ready |
+| :--------------- | :-------------------------------------------------------- | :-------------------: | :---------------------------: | :------------------: | :-----------------: | :--------------: |
+| Development tier | Trial<br/>Pay as you go<br/>Entitlement contract<br/>Flex | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Essentials tier | Entitlement contract<br/>Flex | ✅ | ✅ | ❌ | ❌ | ✅ |
+| Standard tier | Entitlement contract<br/>Flex | ✅ | ✅ | ✅ | ❌ | ✅ |
+| Premium tier | Entitlement contract<br/>Flex | ✅ | ✅ | ✅ | ✅ | ✅ |
+
+### Development tier
+
+Development tier clusters are available to all users. They support all Consul features across HCP-enabled regions and cloud providers. Development tier clusters are designed for evaluation and testing purposes. They are deployed with a single node and are not suitable for production environments.
+
+| Supported billing models | Supported cloud providers | Multi-region support | Multi-cloud support | Production ready |
+| :-------------------------------------------------------: | :-----------------------: | :------------------: | :-----------------: | :--------------: |
+| Trial<br/>Pay as you go<br/>Entitlement contract<br/>Flex | AWS<br/>Azure | ✅ | ✅ | ❌ |
+
+The following size options are available for development tier clusters:
+
+| Cluster size | Service instances |
+| :----------: | :---------------: |
+| Extra small | 1 - 50 |
+
+### Essentials tier
+
+Essentials tier clusters are full-featured, production-ready clusters best suited for single region workloads.
+
+For example, an HCP Consul Dedicated essentials tier cluster deployed in AWS `us-east-1` can establish a cluster peering connection with any of the following clusters:
+
+- an essentials, standard, or premium tier cluster deployed in AWS `us-east-1`.
+- a self-managed Enterprise cluster deployed in AWS `us-east-1`.
+
+WAN federation between essentials tier clusters is restricted to federating HCP Consul Dedicated clusters in the same AWS region. WAN federation and cluster peering cannot be used on the same cluster concurrently.
+
+| Supported billing models | Supported cloud providers | Multi-region support | Multi-cloud support | Production ready |
+| :---------------------------: | :-----------------------: | :------------------: | :-----------------: | :--------------: |
+| Entitlement contract<br/>Flex | AWS<br/>Azure | ❌ | ❌ | ✅ |
+
+The following size options are available for essentials tier clusters:
+
+| Cluster size | Service instances |
+| :----------: | :---------------: |
+| Small | 10 - 500 |
+| Medium | 501 - 2,500 |
+| Large | 2,501 - 10,000 |
+
+HCP Consul Dedicated deploys essentials tier clusters with three server nodes. To provide high availability, HCP Consul Dedicated deploys each node in a separate availability zone.
+
+### Standard tier
+
+Standard tier clusters are full-featured, production-ready clusters best suited for multi-region workloads in a single cloud provider.
+
+For example, an HCP Consul Dedicated standard tier cluster deployed in AWS `us-east-1` can establish a cluster peering connection with any of the following clusters:
+
+- an essentials, standard, or premium tier cluster deployed in AWS `us-east-1`.
+- a standard or premium tier cluster deployed in AWS `us-west-2`.
+- a self-managed Enterprise cluster deployed in AWS `us-west-2`.
+
+WAN federation between standard tier clusters is restricted to federating HCP Consul Dedicated clusters across AWS regions. WAN federation and cluster peering cannot be used on the same cluster concurrently.
+
+| Supported billing models | Supported cloud providers | Multi-region support | Multi-cloud support | Production ready |
+| :---------------------------: | :-----------------------: | :------------------: | :-----------------: | :--------------: |
+| Entitlement contract<br/>Flex | AWS<br/>Azure | ✅ | ❌ | ✅ |
+
+The following size options are available for standard tier clusters:
+
+| Cluster size | Service instances |
+| :----------: | :---------------: |
+| Small | 10 - 500 |
+| Medium | 501 - 2,500 |
+| Large | 2,501 - 10,000 |
+
+HCP Consul Dedicated deploys standard tier clusters with three server nodes. To provide high availability, HCP Consul Dedicated deploys each node in a separate availability zone.
+
+### Premium tier
+
+Premium tier clusters are full-featured, production-ready clusters best suited for multi-region workloads in any HCP-supported cloud provider.
+
+For example, an HCP Consul Dedicated premium tier cluster deployed in AWS `us-east-1` can establish a cluster peering connection with any of the following clusters:
+
+- an essentials, standard, or premium tier cluster deployed in AWS `us-east-1`.
+- a standard or premium tier cluster deployed in AWS `us-west-2`.
+- a premium tier cluster deployed in Azure `CentralUS`.
+- a self-managed Enterprise cluster deployed in an on-premises data center.
+
+| Supported billing models | Supported cloud providers | Multi-region support | Multi-cloud support | Production ready |
+| :---------------------------: | :-----------------------: | :------------------: | :-----------------: | :--------------: |
+| Entitlement contract<br/>Flex | AWS<br/>Azure | ✅ | ✅ | ✅ |
+
+The following size options are available for premium tier clusters:
+
+| Cluster size | Service instances |
+| :----------: | :---------------: |
+| Small | 10 - 500 |
+| Medium | 501 - 2,500 |
+| Large | 2,501 - 10,000 |
+
+HCP Consul Dedicated deploys premium tier clusters with three server nodes. To provide high availability, HCP Consul Dedicated deploys each node in a separate availability zone.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/concepts/consul-central.mdx b/content/hcp-docs/content/docs/consul/concepts/consul-central.mdx
new file mode 100644
index 0000000000..bfad3d5b05
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/concepts/consul-central.mdx
@@ -0,0 +1,15 @@
+---
+page_title: HCP Consul Central
+description: |-
+  HCP Consul Central was a hosted management plane service that was deprecated in November 2024.
+---
+
+# HCP Consul Central
+
+HCP Consul Central was a hosted management plane service available through the HashiCorp Cloud Platform that supported centralized global management operations across Consul clusters. HCP Consul Central was deprecated on November 6, 2024 and is no longer available.
+
+To replace the visibility gained from HCP Consul Central, we recommend the [Grafana observability dashboards in the Consul documentation](/consul/docs/connect/observability/grafanadashboards).
+
+You can still use the Consul UI for cluster-level management, but there are no equivalent tools for global control.
+
+This decision is part of our ongoing efforts to streamline our offerings and enhance our focus on delivering the best possible solutions to our customers. If you have questions or concerns, reach out to your account team or [submit a support ticket](https://support.hashicorp.com/hc/en-us/requests/new).
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/concepts/network-topologies.mdx b/content/hcp-docs/content/docs/consul/concepts/network-topologies.mdx
new file mode 100644
index 0000000000..cf9f56361d
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/concepts/network-topologies.mdx
@@ -0,0 +1,112 @@
+---
+page_title: Network topologies
+description: |-
+  On HCP, network connectivity between clusters depends on whether clusters are deployed to the same HVN, the same cloud environment, or different cloud environments. Learn about the supported network topologies and their requirements.
+---
+
+# Network topologies
+
+@include 'alerts/consul-dedicated-eol.mdx'
+
+This page explains the supported network topologies for establishing cluster peering connections. The location of services across clusters, HashiCorp Virtual Networks (HVNs), and cloud providers affects the network connectivity requirements, which are determined by the [cluster tier](/hcp/docs/consul/concepts/cluster-tiers) you select when deploying an HCP Consul Dedicated cluster.
+
+## Peer HCP Consul Dedicated clusters on the same HVN
+
+If you have two HCP Consul Dedicated clusters on the same HVN, they already share network connectivity. As a result, you can establish cluster peering between the clusters.
+
+![Diagram of two HCP Consul Dedicated clusters on a single HVN with cluster peering](/img/docs/consul/cluster-tiers/two-hcp-managed-one-hvn.png)
+
+This diagram describes the cluster peering topology for HCP Consul Dedicated [essentials tier clusters](/hcp/docs/consul/concepts/cluster-tiers#essentials-tier) deployed in a single HVN.
+
+## Peer HCP Consul Dedicated clusters on different HVNs
+
+If you have HCP Consul Dedicated clusters deployed on two different HVNs, the HVNs are automatically peered when you establish a cluster peering connection.
+
+If the HVNs are in different regions, both cluster tiers must be Standard or higher.
In non-production settings, you can also use development tier clusters to evaluate and test multi-region cluster peering connections. + + + +Cross-project HVN peering is currently not supported. + + + +![Diagram of two HCP Consul Dedicated clusters on AWS HVN with cluster peering between two HVNs in two regions](/img/docs/consul/cluster-tiers/two-hcp-managed-two-regions-hvns.png) + +This diagram describes the cluster peering topology for HCP Consul Dedicated [standard tier clusters](/hcp/docs/consul/concepts/cluster-tiers#standard-tier) deployed on AWS in two HVNs in different regions. + +## Peer HCP Consul Dedicated clusters and self-managed Community and Enterprise clusters (single-cloud) + +If you have an HCP Consul Dedicated cluster and a self-managed Community or Enterprise cluster deployed on the same cloud provider, you must peer the HVN and the cloud network (VPC or VNet) before you can establish cluster peering connections. Refer to [HCP AWS peering connections](/hcp/docs/hcp/network/hvn-aws/hvn-peering) or [HCP Azure peering connections](/hcp/docs/hcp/network/hvn-azure/hvn-peering) to learn how to peer the HVN. + +If the HVN and your cloud network are in the same region, both cluster tiers must be Essentials or higher. If the HVN and your cloud network are in different regions, both cluster tiers must be Standard or higher. + +![Diagram of HCP Consul Dedicated cluster and self-managed Community or Enterprise cluster with cluster peering between HVN and VPC in two regions](/img/docs/consul/cluster-tiers/hcp-managed-self-managed-two-regions-one-cloud.png) + +This diagram describes the cluster peering topology for HCP Consul Dedicated and self-managed [standard tier clusters](/hcp/docs/consul/concepts/cluster-tiers#standard-tier) deployed on AWS in two different regions. + +## Peer Consul clusters on different clouds (multi-cloud) + +There are two ways to peer HCP Consul Dedicated and self-managed Community and Enterprise clusters on AWS and Azure.
+ +- Peer Consul clusters [through public IPs](#peer-consul-clusters-through-public-ip). This method is best for networks that require public access. +- Peer Consul clusters [through mesh gateways](#peer-consul-clusters-through-mesh-gateways). This method is the most secure option for production environments. + +Because the Consul clusters are deployed in different clouds, both cluster tiers must be Premium. In non-production settings, you can also use development tier clusters to evaluate and test multi-cloud cluster peering connections. + +### Peer Consul clusters through public IP + +If you have two publicly accessible Consul clusters deployed in AWS and Azure, you can establish a cluster peering connection between the clusters using their public IPs. This option is best suited for connecting two development tier clusters to evaluate cluster peering with HCP clusters. You can also [secure your clusters with IP allowlist](/hcp/docs/consul/extend/cluster-peering/establish#secure-access-with-ip-allowlist) to limit connections between the peered clusters according to IP CIDRs. + +When you create a cluster peering connection, you have the option to include one or more server addresses. These addresses are the public IPs where the servers are available. This field auto-populates for public clusters. + +When using self-managed Community and Enterprise clusters, an alternative option is to expose the cluster to the outside network through external load balancers. Then, you can use the load balancer's DNS or IP address when creating a peering token. Refer to the [`ServerExternalAddresses` documentation](/consul/api-docs/peering#serverexternaladdresses) for more information.
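As a sketch of creating such a token, assuming a hypothetical load balancer at `lb.example.com` and the `CONSUL_HTTP_ADDR` and `CONSUL_HTTP_TOKEN` environment variables set for the cluster that generates the token, the request might look like the following:

```shell-session
$ curl --request POST \
    --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" \
    --data '{"PeerName": "cluster-02", "ServerExternalAddresses": ["lb.example.com:8502"]}' \
    ${CONSUL_HTTP_ADDR}/v1/peering/token
```

The peer name `cluster-02` is a placeholder, and `8502` assumes the load balancer forwards to the servers' gRPC TLS port; adjust the port to match your load balancer listener.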
+ +![Diagram of two HCP Consul Dedicated clusters deployed to different HVNs and different cloud providers with public IP connection](/img/docs/consul/cluster-tiers/two-hcp-managed-two-cloud-public-ip.png) + +This diagram describes the cluster peering topology for two HCP Consul Dedicated [premium tier clusters](/hcp/docs/consul/concepts/cluster-tiers#premium-tier) deployed to separate HVNs on AWS and Azure, with a public IP connection between the two. + +### Peer Consul clusters through mesh gateways + +If you have Consul clusters deployed separately in AWS and Azure that you want to connect without exposing them to the public network, establish a cluster peering connection through mesh gateways deployed to each environment. This method supports connections between two HCP Consul Dedicated clusters in different clouds, connections between HCP Consul Dedicated and self-managed Community and Enterprise clusters, and connections between two self-managed Community and Enterprise clusters. Mesh gateways must be deployed to the default partition in both workload environments. + +To use cluster peering through mesh gateways, you must first peer the HVNs and your cloud networks. Refer to [HCP AWS peering connections](/hcp/docs/hcp/network/hvn-aws/hvn-peering) or [HCP Azure peering connections](/hcp/docs/hcp/network/hvn-azure/hvn-peering) to learn how to peer the HVN. + +![Diagram of two HCP Consul Dedicated clusters deployed to different HVNs and different cloud providers connected through mesh gateways](/img/docs/consul/cluster-tiers/two-hcp-managed-two-cloud-mesh-gateway.png) + +This diagram describes the cluster peering topology for two HCP Consul Dedicated [premium tier clusters](/hcp/docs/consul/concepts/cluster-tiers#premium-tier) deployed to separate HVNs on AWS and Azure that are connected through mesh gateways. Each mesh gateway is deployed to the `default` partition in the VPC or VNet that has an established connection to the HVN.
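For self-managed clusters that do not run on Kubernetes, the same behavior can be configured with config entries. The following is a sketch that assumes the `consul` CLI is authenticated against one of the clusters:

```shell-session
$ cat > mesh.hcl <<'EOF'
Kind = "mesh"
Peering {
  PeerThroughMeshGateways = true
}
EOF
$ consul config write mesh.hcl
$ cat > proxy-defaults.hcl <<'EOF'
Kind = "proxy-defaults"
Name = "global"
MeshGateway {
  Mode = "local"
}
EOF
$ consul config write proxy-defaults.hcl
```

The `mesh` config entry routes peering control plane traffic through the mesh gateways, and the `proxy-defaults` entry sets the local gateway mode for data plane traffic.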
+ +If you use Consul on Kubernetes, you must enable `peerThroughMeshGateways` in the `Mesh` CRD and set the mode to `local` in the `ProxyDefaults` CRD. Refer to the following configuration examples: + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh +spec: + peering: + peerThroughMeshGateways: true +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global + namespace: consul +spec: + config: + protocol: http + expose: {} + meshGateway: + mode: local +``` + + + +Refer to [mesh gateway configuration for Kubernetes](/consul/docs/k8s/connect/cluster-peering/tech-specs#mesh-gateway-configuration-for-kubernetes) for more information. diff --git a/content/hcp-docs/content/docs/consul/dedicated/access.mdx b/content/hcp-docs/content/docs/consul/dedicated/access.mdx new file mode 100644 index 0000000000..9321274511 --- /dev/null +++ b/content/hcp-docs/content/docs/consul/dedicated/access.mdx @@ -0,0 +1,63 @@ +--- +page_title: Access HCP Consul Dedicated clusters +description: |- + To access and interact with HCP Consul Dedicated clusters, use either the HTTP API or the Consul UI. Learn how to get the access URL, generate an admin token, and connect to the Consul cluster. +--- + +# Access HCP Consul Dedicated clusters + +@include 'alerts/consul-dedicated-eol.mdx' + +After creating a HCP Consul Dedicated cluster, HCP provides both public and private URLs that you can use to access your cluster. You can also generate an admin token to access your cluster through the HTTP API or the Consul UI. + +## Get access URL + +HCP provides the URL to access the Consul UI or make API calls to the Consul server. To access the URL, complete the following steps: + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you created the cluster you want to access. +1. Click **Consul**. +1. 
From the Consul Overview, click the ID of the cluster you want to access. +1. Click **Access Consul**. + +HCP Consul Dedicated has a URL for the _public address_ and a URL for the _private address_. The public address allows access from any location over the public internet. The public address is most suitable for testing, development, and debugging scenarios. The private address limits access to connected networks. In production scenarios, we recommend using only the private address. + +## Generate admin token + +To authenticate access to a HCP Consul Dedicated cluster, HCP provides an admin token that gives you unlimited privileges for interacting with your cluster. You can use this token to access the Consul UI, deploy client agents with the Consul API, or interact with Consul using the CLI. + +To generate the admin token, complete the following steps: + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you created the cluster you want to access. +1. Click **Consul**. +1. From the Consul Overview, click the ID of the cluster you want to access. +1. Click **Access Consul** and then **Generate admin token**. + +HCP provides an admin token that you can copy and use to access the Consul cluster. You cannot access the admin token again after closing the window that contains it. If you lose the admin token, you must generate a new one. + +## Connect to the Consul HTTP API + +To connect to the HTTP API for a HCP Consul Dedicated cluster, export the URL and token: + +```shell-session +$ export CONSUL_HTTP_ADDR=https://mycluster.consul.cc9a0090-3400-xxx-a3d2-xxx.aws.hashicorp.cloud && export CONSUL_HTTP_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx +``` + +Next, run an API call to the desired endpoint. For example, the following command returns the trusted certificate authority (CA) root certificates for the HCP Consul Dedicated cluster.
+ +```shell-session +$ curl --header "X-Consul-Token: ${CONSUL_HTTP_TOKEN}" ${CONSUL_HTTP_ADDR}/v1/agent/connect/ca/roots +``` + +## View the Consul UI + +You can use the HCP interface to view the cluster's built-in Consul UI. To view the UI, complete the following steps: + +1. Open the access URL in your browser. +1. Click **Log in**. +1. Paste the admin token and then click **Log in**. + +The Consul UI appears. The default page lists the services deployed to the cluster. + +For more information about navigating the Consul UI and using Consul features, refer to the [Consul documentation](/consul/docs). diff --git a/content/hcp-docs/content/docs/consul/dedicated/clients.mdx b/content/hcp-docs/content/docs/consul/dedicated/clients.mdx new file mode 100644 index 0000000000..bcb2a91377 --- /dev/null +++ b/content/hcp-docs/content/docs/consul/dedicated/clients.mdx @@ -0,0 +1,134 @@ +--- +page_title: Deploy clients +description: |- + Learn how to deploy Consul clients and register services with HCP Consul Dedicated clusters in HashiCorp Cloud Platform (HCP). +--- + +# Deploy Consul clients + +You can deploy Consul clients in your infrastructure environment and connect them to the Consul servers managed in HashiCorp Cloud Platform (HCP). + +As of Consul v1.14.0, Kubernetes deployments use Consul dataplanes instead of client agents. If you use Kubernetes with HCP Consul Dedicated, refer to [Deploy Consul dataplanes](/hcp/docs/consul/dedicated/dataplanes). For more information about what a Consul dataplane is and how it works, refer to [Simplified service mesh with Consul Dataplane in the Consul documentation](/consul/docs/connect/dataplane). 
+ +## Prerequisites + +Before you deploy Consul clients, you need to create: + +- A HashiCorp virtual network (HVN) + - [Create and Manage an HVN on AWS](/hcp/docs/hcp/network/hvn-aws/hvn-aws) + - [Create and Manage an HVN on Azure](/hcp/docs/hcp/network/hvn-azure/hvn-azure) +- A peering connection or transit gateway attachment + - [Peering Connections on AWS](/hcp/docs/hcp/network/hvn-aws/hvn-peering) + - [Transit Gateway Attachments on AWS](/hcp/docs/hcp/network/hvn-aws/tgw-attach) + - [Peering Connections on Azure](/hcp/docs/hcp/network/hvn-azure/hvn-peering) +- Routes for directing network traffic between the HVN and a target connection + - [Routes on AWS](/hcp/docs/hcp/network/hvn-aws/routes) + - [Routes on Azure](/hcp/docs/hcp/network/hvn-azure/routes) +- A [Consul server cluster](/hcp/docs/consul/dedicated/create) + +### Network latency + +Before deploying clients, ensure that the latency between the HashiCorp-managed servers and the user-managed environment is sufficiently low. Consul uses the [gossip protocol](/consul/docs/architecture/gossip) to share information across agents. For Consul to function properly, traffic cannot exceed the protocol's maximum latency threshold. + +The latency threshold is calculated according to the total round trip time (RTT) for communication between servers and clients. + +For data sent from a Consul client to an HCP Consul Dedicated server: + +- Average RTT for all traffic cannot exceed 50ms. +- RTT for 99 percent of traffic cannot exceed 100ms. + +We recommend deploying Consul clients to the same region as your HCP Consul Dedicated cluster's HVN to avoid potential cross-region network interruptions. + +The following websites can help you determine latency between your HVN and workload regions where Consul client agents are deployed: + +- [AWS Latency Monitoring website](https://www.cloudping.co/grid/p_50/timeframe/1M). Contains real-time information about AWS region latency.
+- [Azure network round-trip latency statistics](https://docs.microsoft.com/en-us/azure/networking/azure-network-latency). Contains updated information about Azure region latency. + +## Download configuration files + +After you create a HCP Consul Dedicated cluster, you can download a `.zip` file that contains the credentials required to deploy clients and register services with the Consul servers. This file contains the following credentials: + +1. `client_config.json` contains a pre-configured client agent configuration to register services. +1. `ca.pem` contains the data necessary to authorize TLS connections with Consul's certificate authority. + +To protect your cluster from unwanted connections, you should keep both the configuration file and the certificate in a secure location. + +To download the configuration files, complete the following steps: + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you created the cluster you want to deploy clients with. +1. Click **Consul**. +1. From the Consul Overview, click the cluster ID you want to deploy clients with. +1. Click **Access Consul** and then **Download Consul configuration files**. + +### Client configuration file + +HCP Consul Dedicated automatically generates a client configuration file with the parameters and values necessary to deploy clients to your HCP Consul Dedicated cluster.
Values unique to your cluster include the following: + +- The cluster ID functions as the name of the datacenter +- An unencrypted gossip key for servers and clients to securely communicate with each other +- The cluster's private address +- The path to the CA file + +The following code block contains an example of a client configuration file generated by HCP Consul Dedicated, including the placement of values specific to an individual cluster: + +```json +{ + "acl": { + "enabled": true, + "down_policy": "async-cache", + "default_policy": "deny" + }, + "datacenter": "", + "encrypt": "", + "encrypt_verify_incoming": true, + "encrypt_verify_outgoing": true, + "server": false, + "log_level": "INFO", + "ui": true, + "retry_join": [ + "" + ], + "auto_encrypt": { + "tls": true + }, + "tls": { + "defaults": { + "ca_file": "<./ca.pem>", + "verify_outgoing": true + } + } +} +``` + +## Register services + +To register services with a HCP Consul Dedicated cluster, add service definitions to the `client_config.json` configuration file and then register the service with the HTTP API. + +Refer to the following topics in the Consul documentation for more information about defining a service: + +- [Services configuration reference](/consul/docs/services/configuration/services-configuration-reference) contains complete specification information for a service definition. +- [Agents overview](/consul/docs/agent) describes the agent requirements and usage. It also includes an [example configuration file](/consul/docs/agent#client-node-registering-a-service) that defines both a client agent and a service in a single file. + +To register the service, send a `PUT` request with the name of the service definition file to the `/agent/service/register` endpoint.
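For illustration, the following sketch creates a minimal definition for a hypothetical `web` service; the service name, port, and health check endpoint are placeholder values you would replace with your own:

```shell-session
$ cat > service.json <<'EOF'
{
  "Name": "web",
  "Port": 8080,
  "Check": {
    "HTTP": "http://localhost:8080/health",
    "Interval": "10s"
  }
}
EOF
```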
The following example request registers the service defined in a file named `service.json`: + +```shell-session +$ curl --request PUT --data @service.json http://localhost:8500/v1/agent/service/register +``` + +Refer to [Service - Agent HTTP API](/consul/api-docs/agent/service) in the API documentation for more information about this endpoint. + +## Create service intentions + +After registering your services with the HCP Consul Dedicated servers, you must create intentions to authorize communication between the services. Refer to [create and manage intentions](/consul/docs/connect/intentions/create-manage-intentions) in the Consul documentation for more information. + +For additional guidance, complete the [Manage Service Access Permission with Intentions](/consul/tutorials/get-started-hcp/hcp-gs-intentions) tutorial in the [Get Started with HCP Consul Dedicated](/consul/tutorials/get-started-hcp) collection. + +## Additional resources + +You can automate the process to deploy clients and register services using Terraform. The steps vary depending on your cloud provider. The following tutorials contain step-by-step guidance for deploying clients on virtual machines: + +- [Connect a Consul client to AWS VM tutorial](/hcp/tutorials/consul-cloud/consul-client-aws-ec2) +- [Connect a Consul client to Azure VM tutorial](/hcp/tutorials/consul-cloud/consul-client-azure-virtual-machines) + +You can also deploy clients using AWS Elastic Container Service (ECS). Refer to the [Serverless Consul service mesh with ECS and HCP tutorial](/consul/tutorials/cloud-production/consul-ecs-hcp) for more information.
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/consul/dedicated/create.mdx b/content/hcp-docs/content/docs/consul/dedicated/create.mdx new file mode 100644 index 0000000000..ae5f9a677d --- /dev/null +++ b/content/hcp-docs/content/docs/consul/dedicated/create.mdx @@ -0,0 +1,166 @@ +--- +page_title: Create a HCP Consul Dedicated cluster +description: |- + Learn how to create a HCP Consul Dedicated Consul cluster using the HashiCorp Cloud Platform (HCP) interface. +--- + +# Create a HCP Consul Dedicated cluster + +@include 'alerts/consul-dedicated-eol.mdx' + +This page describes how to create *HCP Consul Dedicated clusters*, which are one or more Consul server agents that are installed, configured, and managed for you by HashiCorp. + +## Prerequisites + +Before you create a Consul cluster, configure the following HCP components: + +- A HashiCorp virtual network (HVN) + - [Create and Manage an HVN on AWS](/hcp/docs/hcp/network/hvn-aws/hvn-aws) + - [Create and Manage an HVN on Azure](/hcp/docs/hcp/network/hvn-azure/hvn-azure) +- A peering connection or transit gateway attachment + - [Peering Connections on AWS](/hcp/docs/hcp/network/hvn-aws/hvn-peering) + - [Transit Gateway Attachments on AWS](/hcp/docs/hcp/network/hvn-aws/tgw-attach) + - [Peering Connections on Azure](/hcp/docs/hcp/network/hvn-azure/hvn-peering) +- Routes for directing network traffic between the HVN and a target connection + - [Routes on AWS](/hcp/docs/hcp/network/hvn-aws/routes) + - [Routes on Azure](/hcp/docs/hcp/network/hvn-azure/routes) + +## Create a Consul cluster + + + + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you want to create the cluster. Because resources such as HVNs are associated with an individual project, you must create the cluster in the same project as the HVN peering that supports it. +1. Click **Consul**. +1. From the Consul Overview, click **Deploy Consul**. +1. 
Select **HCP Consul Dedicated** and then click **Get Started**. +1. Select the cloud provider where you host your services and then click **Next**. +1. Select **HCP UI Workflow** and then click **Next**. +1. Choose the HVN where you want to deploy your clusters. You should configure the HVN for the same environment where your Consul agents are deployed. If an appropriate HVN for your environment does not exist, click **Create new network** and then [create a new HVN](/hcp/docs/hcp/network). +1. Enter a name for the cluster in the **Cluster ID** field. The cluster ID is a unique identifier that cannot be used for other active HCP Consul Dedicated clusters. +1. Select a cluster tier. Each tier enables a different set of Consul server features. Refer to [cluster tiers](/hcp/docs/consul/concepts/cluster-tiers) for more information. +1. Select a cluster size. Cluster size is measured by the expected number of service instances the cluster supports. For example, a small cluster supports up to 500 service instances. For pricing information for each cluster size, refer to [HCP Consul Dedicated Pricing](https://www.hashicorp.com/products/consul/pricing). +1. Choose whether the cluster should be private or public. If you want to be able to access the Consul cluster UI from an external network, select **Public**. For production environments, we recommend using private Consul clusters. + + - Public access is less secure. We do not recommend enabling this option for production servers. + - For additional security, enable **Allow select IPs only**. This option lets you allowlist up to three IPv4 address ranges in CIDR notation. + +1. Choose the Consul version for your server agents. If you do not require a specific version, we recommend choosing the default option, which is the latest release of Consul. +1. Click **Create cluster**. + +HCP then begins cluster creation. It takes about 10 minutes for the operation to finish.
Wait until your cluster is created before connecting clusters and deploying agents. + + + + + +HCP includes an option to generate Terraform code that you can run to create a Consul cluster in AWS. The Terraform code deploys Consul to either a new VPC or an existing VPC. It also deploys a demo application that you can interact with and observe in the Consul UI. + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you want to create the cluster. Because resources such as HVNs are associated with an individual project, you must create the cluster in the same project as the HVN peering that supports it. +1. Click **Consul**. +1. From the Consul Overview, click **Deploy Consul**. +1. Select **HCP Consul Dedicated** and then click **Get Started**. +1. Select **Amazon Web Services** and then click **Next**. +1. Select **Terraform automation** and then click **Next**. +1. If you need to create a new VPC for your cluster, click **Create new VPC**. Otherwise, continue with the configuration for **Use an existing VPC**. +1. Select a runtime for your cluster. HCP Consul Dedicated supports EKS, EC2, and ECS runtimes for clusters. +1. Choose a region from the **HCP Region** menu. This region is where you want to create the Consul server. If you are connecting to an existing VPC, we recommend creating the cluster in the same region as your VPC in order to reduce latency. +1. Under **VPC Region**, select a region. This region is where your existing VPC is located or where you want to create your new VPC. +1. Specify the networking information for the existing VPC. Specify the **VPC ID**, **Private route table ID**, **Private subnet 1**, and **Private subnet 2**, if applicable. +1. If you have not already generated them, click **Generate service principal and key**. Copy the provided code and run it in your CLI to export credentials. This step authenticates Terraform for interactions with HCP. 
For more information, refer to [Service Principals](/hcp/docs/hcp/admin/service-principals). +1. Copy the Terraform configuration file generated on HCP. Save this code as `main.tf` in your Terraform directory. +1. Initialize the directory: + + ```shell-session + $ terraform init + ``` + +1. Preview the Terraform plan: + + ```shell-session + $ terraform plan + ``` + +1. If no adjustments are necessary, apply the configuration to build the infrastructure: + + ```shell-session + $ terraform apply -auto-approve + ``` + +The building process takes about 10 minutes to complete. + +After Terraform starts, the cluster appears in the list of Consul clusters in HCP. You can click the name of your cluster to view details, connect a VPC to your cluster, and access the Consul cluster’s UI. + +Wait until cluster creation is complete before you proceed. + + + + +HCP includes an option to generate Terraform code that you can run to create a Consul cluster in Azure. The Terraform code deploys Consul to either a new VNet or an existing VNet. It also deploys a demo application that you can interact with and observe in the Consul UI. + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you want to create the cluster. Because resources such as HVNs are associated with an individual project, you must create the cluster in the same project as the HVN peering that supports it. +1. Click **Consul**. +1. From the Consul Overview, click **Deploy Consul**. +1. Select **HCP Consul Dedicated** and then click **Get Started**. +1. Select **Microsoft Azure** and then click **Next**. +1. Select **Terraform automation** and then click **Next**. +1. If you need to create a new VNet for your cluster, click **Create new VNet**. Otherwise, continue with the configuration for **Use an existing VNet**. +1. Select a runtime for your cluster. HCP Consul Dedicated supports AKS and Azure VM runtimes for clusters. +1. 
Choose a region from the **HCP Region** menu. This region is where you want to create the Consul server. If you are connecting to an existing VNet, we recommend creating the cluster in the same region as your VNet in order to reduce latency. +1. Specify the networking information for the existing VNet. Specify the **VNet name**, **Resource group name**, **Azure subscription ID**, **Subnet 1**, and **Subnet 2**, if applicable. +1. If you have not already generated them, click **Generate service principal and key**. Copy the provided code and run it in your CLI to export credentials. This step authenticates Terraform for interactions with HCP. For more information, refer to [Service Principals](/hcp/docs/hcp/admin/service-principals). +1. Copy the Terraform configuration file generated on HCP. Save this code as `main.tf` in your Terraform directory. +1. Initialize the directory: + + ```shell-session + $ terraform init + ``` + +1. Preview the Terraform plan: + + ```shell-session + $ terraform plan + ``` + +1. If no adjustments are necessary, apply the configuration to build the infrastructure: + + ```shell-session + $ terraform apply -auto-approve + ``` + +HCP begins to create the cluster. It takes about 10 minutes for the operation to finish. Wait until your cluster is created before connecting clusters and deploying agents. + +> **Tutorial:** For step-by-step guidance on deploying HCP Consul Dedicated on Azure using Terraform automation, refer to the [Deploy HCP Consul Dedicated on VMs using Terraform tutorial](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-azure-vm). + + + + +## Troubleshooting + +You may encounter the following error when attempting to deploy a cluster in EKS using the code that HCP Consul provides: + + + +```log +Error: "exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1" +``` + + + +This error occurs when using outdated versions of the AWS CLI and IAM authenticator.
Upgrade both to the latest version, and then run the code provided by HCP to complete the process. + +## Edit a cluster after creation + +After you create the cluster, you can change the cluster size, edit the select IPs that can access the cluster, and update the Consul version. However, you cannot modify the cluster name, tier, or HVN after creating a cluster. + +To edit an existing cluster, complete the following steps: + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you created the cluster you want to edit. +1. Click **Consul**. +1. From the Consul Overview, next to the cluster you want to edit, click the **...** menu and then **Edit cluster**. + +HashiCorp automatically updates your Consul clusters to fix common vulnerabilities and exposures (CVE). To learn more about upgrading your Consul version, refer to [Upgrade your network](/hcp/docs/consul/upgrade). \ No newline at end of file diff --git a/content/hcp-docs/content/docs/consul/dedicated/dataplanes.mdx b/content/hcp-docs/content/docs/consul/dedicated/dataplanes.mdx new file mode 100644 index 0000000000..1b3771be1d --- /dev/null +++ b/content/hcp-docs/content/docs/consul/dedicated/dataplanes.mdx @@ -0,0 +1,150 @@ +--- +layout: docs +page_title: Deploy Consul dataplanes +description: >- + Consul dataplanes remove the need to run client agents in your deployments. Learn about using Consul Dataplane with HCP Consul Dedicated servers. +--- + +# Deploy Consul dataplanes + +@include 'alerts/consul-dedicated-eol.mdx' + +This page provides usage information for running Consul dataplanes with HCP Consul Dedicated. Dataplanes enable communication between a HCP Consul Dedicated cluster and services running in a user-managed Kubernetes cluster. + +For more information, including architecture, features, and constraints, refer to [Simplified Service Mesh with Consul Dataplane](/consul/docs/connect/dataplane) in the Consul documentation.
+ +## Introduction + +Consul dataplanes are lightweight processes for managing Envoy proxies. They remove the need to run client agents on every node in a cluster by leveraging existing Kubernetes sidecar orchestration capabilities. As of Consul v1.14.0, Kubernetes deployments use Consul dataplanes instead of client agents by default. + +When using dataplanes with HCP Consul Dedicated, you run a configured Consul container in your Kubernetes cluster. This Consul instance connects to the external HCP Consul Dedicated servers and enables Consul to inject dataplanes into sidecar containers. + +## Prerequisites + +To deploy dataplanes with HCP Consul Dedicated, you must meet the following minimum version requirements: + +- Consul v1.14.0 +- Consul K8s v1.0.0 + +Refer to [Consul on Kubernetes Version Compatibility](/consul/docs/k8s/compatibility) for more information about Kubernetes version requirements with specific Consul versions. + +## Deploy dataplanes with HCP Consul Dedicated + +Complete the following steps to connect services in your Kubernetes cluster to HCP Consul Dedicated servers: + +1. [Prepare your Kubernetes cluster](#prepare-your-kubernetes-cluster) +1. [Configure Consul for your Kubernetes cluster](#configure-consul-for-your-kubernetes-cluster) +1. [Install Consul on your Kubernetes cluster](#install-consul-on-your-kubernetes-cluster) +1. [Apply services to your Kubernetes cluster](#apply-services-to-your-kubernetes-cluster) + +These steps reflect the Consul documentation's guidance to [Join Kubernetes clusters to external Consul servers](/consul/docs/k8s/deployment-configurations/servers-outside-kubernetes). For more information about configuring Consul on Kubernetes, refer to the Consul documentation's [Helm Chart Reference](/consul/docs/k8s/helm). + +This page describes the process to deploy dataplanes and connect them to a HCP Consul Dedicated cluster that already exists in your organization.
If you have not done so, refer to [Create a HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/create). + +### Prepare your Kubernetes cluster + +To automate the process for deploying dataplanes, you must create a Kubernetes Secret containing your [HCP Consul Dedicated cluster admin token](/hcp/docs/consul/dedicated/access#generate-admin-token) in the Kubernetes Namespace where you install Consul. This secret initializes the ACL system in the Consul workload scheduled on your Kubernetes cluster with credentials that enable secure access to the ACL system running on the HCP Consul Dedicated cluster. + +Complete the following steps to create the Secret in the `consul` Namespace on Kubernetes: + +1. Create a `consul` Namespace in your Kubernetes cluster. + + ```shell-session + $ kubectl create namespace consul + namespace/consul created + ``` + +1. Create a Kubernetes secret with your cluster's admin token. + + ```shell-session + $ kubectl create secret generic "consul-bootstrap-token" --from-literal="token=" --namespace consul + secret/consul-bootstrap-token created + ``` + +### Configure Consul for your Kubernetes cluster + +Set the following configurations in the Helm chart or `values.yaml` file: + + - The `global.enabled` value should be `false`. This setting disables all chart components by default so that each component is opt-in. + - The datacenter name must match the [cluster ID of your HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/reference#cluster-configuration-reference). + - The version of Consul in `global.image` should match the version running on the HCP Consul Dedicated cluster. + - The ACL system's `bootstrapToken` must invoke the `consul-bootstrap-token` Secret that contains the HCP Consul Dedicated cluster's admin token. + - Use `server.enabled: false` to disable server agent features. 
+ - Configure an `externalServers` stanza with the [HCP Consul Dedicated cluster's IP address](/hcp/docs/consul/dedicated/access#get-access-url) and your K8s cluster's [API server URL](https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/).
+
+The following example demonstrates required values and their configurations:
+
+
+
+```yaml
+global:
+  name: consul
+  enabled: false
+  datacenter: <cluster-id>
+  image: "hashicorp/consul:<version>"
+  acls:
+    manageSystemACLs: true
+    bootstrapToken:
+      secretName: consul-bootstrap-token
+      secretKey: token
+  tls:
+    enabled: true
+  enableConsulNamespaces: true
+externalServers:
+  enabled: true
+  hosts: ["<cluster-address>"]
+  httpsPort: 443
+  useSystemRoots: true
+  k8sAuthMethodHost: <kubernetes-api-url>
+server:
+  enabled: false
+connectInject:
+  enabled: true
+```
+
+
+
+Refer to the official [Helm chart reference](/consul/docs/k8s/helm#configuration-values) for more information about values and their specifications.
+
+### Install Consul on your Kubernetes cluster
+
+Use either Helm or the `consul-k8s` CLI to apply the configuration to your Kubernetes cluster and deploy Consul. Be sure to install Consul in the Kubernetes Namespace that contains the `consul-bootstrap-token` secret. The `consul-k8s` CLI installs to the `consul` Namespace by default.
+
+If necessary, you can include a `--version` flag to install Consul according to the Helm chart or `consul-k8s` release that is compatible with your Kubernetes cluster. Refer to [Consul on Kubernetes Version Compatibility](/consul/docs/k8s/compatibility) for more information about Kubernetes version requirements with specific Consul releases.
+
+Run either of the following commands for your preferred installation method:
+
+
+
+
+```shell-session
+$ helm install consul hashicorp/consul --values values.yaml --namespace consul --version "1.2.0"
+```
+
+
+
+
+```shell-session
+$ consul-k8s install -config-file=values.yaml
+```
+
+
+
+
+The installation process should finish within a few minutes.
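Before you run either install command, you can sanity-check that your values file contains the settings this page requires. The following shell sketch is illustrative only — the file path and the specific checks are assumptions, not part of the official workflow:

```shell
# Write a minimal copy of the required settings and confirm each one is
# present before installing. In practice, point the checks at your real
# values.yaml instead of this illustrative file.
cat > /tmp/values-check.yaml <<'EOF'
global:
  enabled: false
  acls:
    manageSystemACLs: true
externalServers:
  enabled: true
server:
  enabled: false
EOF

# Each grep verifies one requirement from the configuration section above.
grep -q 'manageSystemACLs: true' /tmp/values-check.yaml && echo "ACL bootstrap configured"
grep -q 'externalServers' /tmp/values-check.yaml && echo "external servers configured"
```

If a check prints nothing, the corresponding setting is missing, and the installation is likely to come up without ACLs or without a connection to the HCP Consul Dedicated servers.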
+
+After you install Consul on your Kubernetes cluster, Consul does not deploy dataplanes until you register a service with Consul. Refer to [Define services](/consul/docs/services/usage/define-services) in the Consul documentation for more information.
+
+## Upgrading
+
+Before you upgrade Consul to a version that uses Consul Dataplane, you must edit your Helm chart so that client agents are removed from your deployments. Refer to [upgrading to Consul Dataplane](/hcp/docs/k8s/upgrade#upgrading-to-consul-dataplanes) for more information.
+
+## Tutorials
+
+Consul Dataplane is supported on Kubernetes deployments. To learn more about using Kubernetes with HCP Consul Dedicated, refer to the following tutorials:
+
+ - [Create HCP Consul Dedicated cluster for an existing EKS runtime](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-existing-eks)
+ - [Deploy HCP Consul Dedicated with EKS using Terraform](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-eks)
+ - [Deploy HCP Consul Dedicated with AKS using Terraform](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-aks)
+ - [Connect an Elastic Kubernetes Service Cluster to HCP Consul Dedicated](/consul/tutorials/cloud-production/consul-client-eks)
+ - [Connect an Azure Kubernetes Service Cluster to HCP Consul Dedicated](/hcp/tutorials/consul-cloud/consul-client-aks)
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/dedicated/delete.mdx b/content/hcp-docs/content/docs/consul/dedicated/delete.mdx
new file mode 100644
index 0000000000..457635fbe7
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/dedicated/delete.mdx
@@ -0,0 +1,37 @@
+---
+page_title: Delete an HCP Consul Dedicated cluster
+description: |-
+  Learn how to delete an HCP Consul Dedicated cluster using the HashiCorp Cloud Platform (HCP) interface.
+--- + +# Delete a HCP Consul Dedicated cluster + +@include 'alerts/consul-dedicated-eol.mdx' + +Deleting HCP Consul Dedicated clusters removes them from the list of clusters in an HCP project. HashiCorp also deletes all managed resources associated with the cluster. User-managed components, such as services and client agents, lose Consul functionality but are not removed from their cloud environment. + +## Delete a cluster + + + +When you delete an HCP Consul Dedicated cluster, the snapshots associated with the cluster are also removed. It is not possible to recover snapshots after they are removed. If you intend to restore a cluster, [use an API call to download the snapshot](/consul/api-docs/snapshot) before you delete the cluster. When you restore it, the new cluster’s name must match the name of the deleted cluster. + + + +To delete a HCP Consul Dedicated cluster using HCP, complete the following steps: + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you created the cluster you want to delete. +1. Click **Consul**. +1. From the Consul Overview, next to the cluster you want to delete, click **More** (three horizontal dots) and then click **Delete**. +1. To confirm, enter `DELETE`. Then click **Delete**. + +### Federated networks + +When a HCP Consul Dedicated cluster is the primary datacenter in a WAN federated network, HCP does not allow you to delete the cluster if it is still federated with secondary datacenters. Delete all of the secondary datacenters in the federation first, then delete the primary datacenter. + +### Cluster peering + +When you delete a cluster that has an active cluster peering connection with another cluster, HCP removes all data related to the peering connection from clusters. This data includes imported and exported services between clusters. 
If you recreate a cluster after deleting it, you must complete the full process to re-establish a cluster peering connection, including exporting services and configuring service intentions between clusters. + +Refer to [Establish cluster peering connections](/hcp/docs/consul/extend/cluster-peering/establish) for more information. \ No newline at end of file diff --git a/content/hcp-docs/content/docs/consul/dedicated/index.mdx b/content/hcp-docs/content/docs/consul/dedicated/index.mdx new file mode 100644 index 0000000000..01bc260967 --- /dev/null +++ b/content/hcp-docs/content/docs/consul/dedicated/index.mdx @@ -0,0 +1,68 @@ +--- +page_title: HCP Consul Dedicated clusters overview +description: |- + This topic provides an overview for using HCP Consul Dedicated clusters. Learn how to create Consul servers that HashiCorp installs, bootstraps, and configures for you, how to access the servers, and how to connect clients or dataplanes to register services for service discovery and service mesh. +--- + +# HCP Consul Dedicated cluster overview + +@include 'alerts/consul-dedicated-eol.mdx' + +This topic provides an overview for using HCP Consul Dedicated server clusters in your Consul deployment. + +For more information about HCP Consul Dedicated clusters and how they differ from self-managed Community and Enterprise clusters, refer to [cluster management](/hcp/docs/consul/cluster-management). + +## Introduction + +Creating a HCP Consul Dedicated cluster simplifies the overall process of bootstrapping Consul servers. 
The HCP platform automates the following parts of a cluster's lifecycle: + +- Generating and distributing a gossip key between servers +- Starting the certificate authority and distributing TLS certificates to servers +- Bootstrapping the ACL system and saving tokens to a secure Vault environment +- Rotating expired TLS certificates after expiration +- Upgrading servers to new versions of Consul + +## Workflow + +To get started with HCP Consul Dedicated clusters, complete the following tasks in order: + +1. Create an HVN and connect it to your cloud environment. This task prepares your network so that you can establish communication between the Consul servers, which are hosted in a HCP Consul Dedicated environment, and your services, which are hosted in a user-managed environment. Refer to [Create and Manage an HVN](/hcp/docs/hcp/network/hvn-aws/hvn-aws) for more information. +1. [Create an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/create). You can choose between using a guided UI workflow or generating an end-to-end Terraform configuration. +1. Get credentials and URLs to [access the cluster](/hcp/docs/consul/dedicated/access). HCP generates an admin token that you can use to view the Consul UI or make calls to the Consul HTTP API. +1. Depending on whether you use VMs or Kubernetes, [deploy Consul clients](/hcp/docs/consul/dedicated/clients) or [deploy Consul dataplanes](/hcp/docs/consul/dedicated/dataplanes) and register your services with the cluster. +1. Create and apply service intentions to secure communication in the service mesh. For additional guidance, refer to [Create and manage intentions](/consul/docs/connect/intentions/create-manage-intentions) in the Consul documentation. + +## Guidance + +The following resources are available to help you use HCP Consul Dedicated clusters. 
+ +### Concepts and reference + +- [Cluster management](/hcp/docs/consul/concepts/cluster-management) explains the difference between HCP Consul Dedicated clusters and self-managed Community and Enterprise clusters. +- [Cluster tiers](/hcp/docs/consul/concepts/cluster-tiers) explains how the tier you select when creating a HCP Consul Dedicated cluster determines its multi-cloud functionality. +- [Cluster configuration reference](/hcp/docs/consul/dedicated/reference) provides reference information about cluster properties, including the [ports HCP Consul Dedicated clusters listen on](/hcp/docs/consul/dedicated/reference#cluster-server-ports). + +### Tutorials + +- [Deploy HCP Consul Dedicated](/consul/tutorials/get-started-hcp/hcp-gs-deploy) demonstrates the end-to-end deployment for a development tier cluster using the automated Terraform workflow. +- The following tutorials demonstrate the process to create an HVN and connect it to your cloud environment: + - [Hashicorp Virtual Network on Amazon Web Services](/hcp/docs/hcp/network/hvn-aws/hvn-aws) + - [Hashicorp Virtual Network on Microsoft Azure](/hcp/docs/hcp/network/hvn-azure/hvn-azure) +- The following tutorials demonstrate the process to deploy clients for services running on virtual machines: + - [Connect a Consul client to AWS VM](/hcp/tutorials/consul-cloud/consul-client-aws-ec2) + - [Connect a Consul client to Azure VM](/hcp/tutorials/consul-cloud/consul-client-azure-virtual-machines) +- The following tutorials demonstrate the process to deploy dataplanes for services running on Kubernetes using Terraform: + - [Create HCP Consul Dedicated cluster for an existing EKS runtime](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-existing-eks) + - [Deploy HCP Consul Dedicated with EKS using Terraform](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-eks) + - [Deploy HCP Consul Dedicated with AKS using Terraform](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-aks) +- The 
following tutorials demonstrate the process to connect services running in a Kubernetes cluster using Helm:
+  - [Connect an Elastic Kubernetes Service Cluster to HCP Consul Dedicated](/consul/tutorials/cloud-production/consul-client-eks)
+  - [Connect an Azure Kubernetes Service Cluster to HCP Consul Dedicated](/hcp/tutorials/consul-cloud/consul-client-aks)
+
+### Usage documentation
+
+- [Create an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/create)
+- [Access an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/access)
+- [Delete an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/delete)
+- [Deploy Consul clients](/hcp/docs/consul/dedicated/clients)
+- [Deploy Consul dataplanes](/hcp/docs/consul/dedicated/)
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/dedicated/reference.mdx b/content/hcp-docs/content/docs/consul/dedicated/reference.mdx
new file mode 100644
index 0000000000..1330d60111
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/dedicated/reference.mdx
@@ -0,0 +1,40 @@
+---
+page_title: HCP Consul Dedicated cluster configuration reference
+description: |-
+  Learn about the cluster properties you can configure when creating HCP Consul Dedicated clusters.
+---
+
+# HCP Consul Dedicated cluster configuration reference
+
+@include 'alerts/consul-dedicated-eol.mdx'
+
+This page provides reference information about the properties of HCP Consul Dedicated clusters.
+
+Refer to [Create an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/create) for instructions and additional guidance.
+
+## Cluster configuration reference
+
+The following configuration options are available for HCP Consul Dedicated clusters:
+
+| Attribute | Description | Default | Can edit after cluster creation |
+| --------------------- | ------------------------------------------------------------ | ---------------------------------------- | :-----------------------------: |
+| Cluster ID | A name that serves as a unique identifier for your cluster. This value cannot be the same as other active HCP Consul Dedicated clusters. | `consul-cluster` | ❌ |
+| Cluster size | Specifies the vCPU and GiB RAM configurations. Each size corresponds to a number of service instances. Extra small is only available on the development tier. | Small | ✅ |
+| Cluster tier | Four tiers are available:<br/><br/>**Development:** Lowest tier designed for testing purposes.<br/><br/>**Essentials:** Middle tier designed for production workloads located in a single region.<br/><br/>**Standard:** Higher tier designed for production workloads spanning multiple regions.<br/><br/>**Premium:** Highest tier designed for production workloads spanning multiple cloud providers.<br/><br/>For more details, refer to [cluster tiers](/hcp/docs/consul/concepts/cluster-tiers). | Development | ❌ |
+| Consul version | Specifies the Consul version deployed to the cluster. If your environment does not require a specific version, we recommend you use the default version. | Latest GA version | ✅ |
+| Network | Specifies the HVN that contains the cluster. | Defaults to the first HVN listed in HCP. | ❌ |
+| Network accessibility | Enables access to the Consul UI through a public endpoint. For production use cases, we recommend that you disable public accessibility. | Disabled | ❌ |
+
+## Cluster server ports
+
+HCP Consul Dedicated clusters listen on the following ports:
+
+| Type | Port | Protocol | Description |
+| -------- | ---- | ----------- | ------------------------------------------ |
+| HTTPS | 443 | TCP | API and UI |
+| RPC | 8300 | TCP | RPC requests from other agents (TLS) |
+| Serf LAN | 8301 | TCP and UDP | LAN gossip |
+| Serf WAN | 8302 | TCP and UDP | WAN gossip |
+| gRPC | 8502 | TCP | gRPC requests from Consul Dataplanes (TLS) |
+
+Refer to [Consul required ports](/consul/docs/install/ports) for more information.
diff --git a/content/hcp-docs/content/docs/consul/extend/cluster-peering/establish.mdx b/content/hcp-docs/content/docs/consul/extend/cluster-peering/establish.mdx
new file mode 100644
index 0000000000..24a004ab3a
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/extend/cluster-peering/establish.mdx
@@ -0,0 +1,79 @@
+---
+page_title: Establish cluster peering connections
+description: |-
+  To establish cluster peering connections between HCP Consul Dedicated clusters, connect clusters belonging to compatible tiers. Learn how to create connections, confirm a connection's status, export services, and secure access with the IP allowlist.
+---
+
+# Establish cluster peering connections
+
+@include 'alerts/consul-dedicated-eol.mdx'
+
+This page describes how to establish cluster peering connections.
When you create and establish cluster peering connections with the dedicated UI workflow, information about the connections becomes visible in the HCP platform.
+
+## Introduction
+
+In traditional self-managed deployments, the process to establish a cluster peering connection between clusters requires access to the Consul CLI to create and pass peering tokens to other clusters.
+
+You can [use the HCP UI](/hcp/docs/consul/extend/cluster-peering/establish#create-a-cluster-peering-connection) to peer HCP Consul Dedicated clusters in the same HCP project.
+
+The overall process for establishing a cluster peering connection consists of the following steps:
+
+1. Create a cluster peering connection
+1. Check peering connection status
+1. Export services between clusters
+
+After you establish a cluster peering connection, you can use the UI to view the connection's status, a list of exported services, and available imported services. You can also secure cluster access using the IP allowlist.
+
+## Requirements
+
+- Consul v1.14.2 or later
+- Two or more clusters with [compatible cluster tiers](/hcp/docs/consul/concepts/cluster-tiers)
+
+## Create a cluster peering connection
+
+1. From the Consul overview, click **Cluster peering**.
+1. Click **Create cluster peering connection**.
+1. Use the dropdown menus to select a cluster and an admin partition to use for cluster peering.
+1. Repeat the process by selecting the cluster ID and admin partition of the desired peer.
+1. Click **Create**.
+
+If the cluster you select is a publicly available self-managed Community or Enterprise cluster, you have the option to turn on **Include server address** and enter that cluster's public IP. For more information about using public IPs, refer to [cluster peering topologies](/hcp/docs/consul/concepts/network-topologies).
+
+## Check peering connection status
+
+After you create the cluster peering connection, it becomes visible in the HCP UI.
Wait for the status of your cluster peering connection to change to **Active**. + +## Export services between clusters + +After you create a cluster peering connection and its status is **Active**, you can export services to make them available to peers. The HCP UI does not support exporting services. You must define the services you want to export and the peers you want to give access to, then write the configuration to your Consul deployment. + +If the peer you want to export services from is a HCP Consul Dedicated cluster, follow the steps to [export services with a configuration entry](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#export-services-between-clusters). + +For more information about the fields you can configure when exporting services, refer to [exported services configuration entry](/consul/docs/connect/config-entries/exported-services) in the Consul documentation. + +## Authorize services with intentions and ACLs + +HCP uses a global "deny all" intention by default in order to keep service-to-service communication secure. After you export services between peers, you must configure service intentions on each cluster that authorize services to communicate with each other. + +If the peer you want to set service intentions on is a HCP Consul Dedicated cluster, follow the steps to [create service intentions with a configuration entry](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#authorize-services-for-peers). + +For more information about the fields you can configure when defining service intentions, refer to [service intentions configuration entry](/consul/docs/connect/config-entries/service-intentions) in the Consul documentation. + +### Authorize service reads with ACLs + +If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access. 
+
+Read access to all imported services is granted using either of the following rules associated with an ACL token:
+
+- `service:write` permissions for any service in the sidecar's partition.
+- `service:read` and `node:read` for all services and nodes, respectively, in the sidecar's namespace and partition.
+
+For Consul Enterprise, the permissions apply to all imported services in the service's partition. These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities).
+
+Refer to [Reading services](/consul/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` configuration entry documentation for example rules.
+
+For additional information about how to configure and use ACLs, refer to [ACLs system overview](/consul/docs/security/acl).
+
+## Next steps
+
+After establishing a cluster peering connection, you can further secure your deployment by [configuring an IP allowlist](/hcp/docs/consul/secure/ip-allowlist) to limit cluster access. HCP's cluster peering allowlist supports up to three IP address ranges at one time.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/extend/cluster-peering/index.mdx b/content/hcp-docs/content/docs/consul/extend/cluster-peering/index.mdx
new file mode 100644
index 0000000000..39b9441d57
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/extend/cluster-peering/index.mdx
@@ -0,0 +1,54 @@
+---
+page_title: Cluster peering overview
+description: |-
+  Cluster peering with HCP Consul enables services deployed to clusters hosted in different cloud or runtime environments to communicate transparently. Learn about cluster peering on HCP and its supported cluster tiers, tier limitations, and technical constraints.
+--- + +# Cluster peering overview + +@include 'alerts/consul-dedicated-eol.mdx' + +This topic provides an overview of cluster peering, a feature that connects two or more independent clusters so that services deployed to different partitions or datacenters can communicate. + +## Introduction + +If you have HCP Consul Dedicated clusters, you can create cluster peering connections within the same HVN, or between clusters hosted on two different HVNs or cloud providers. + +You can [use the HCP UI](/hcp/docs/consul/extend/cluster-peering/establish#create-a-cluster-peering-connection) to peer HCP Consul Dedicated clusters in the same HCP project. + +For more information about cluster peering, including general usage and Kubernetes-specific information, refer to the [cluster peering overview](/consul/docs/connect/cluster-peering) in the Consul documentation. + +## Multi-region and multi-cloud support + +The HCP cluster tier determines whether the Consul cluster can peer to clusters across multiple regions or clouds. + +For testing purposes, the development tier supports all cluster peering features. To use cluster peering in production environments, you must have an annual subscription through either an annual entitlement contract or Flex billing. + +For more information about how tiers differ in their support for multi-region and multi-cloud cluster peering, refer to [HCP Consul Dedicated tiers](/hcp/docs/consul/concepts/cluster-tiers#cluster-tiers). For information about the features available on each tier, refer to [HCP Consul features](/hcp/docs/consul#features). + +### Tier limitations + +When using cluster peering, be aware of the following limitations and technical constraints. + +- Cluster peering in production environments requires an account with either an annual entitlement contract or Flex billing. Refer to [billing models](/hcp/docs/hcp/admin/billing#billing-models) for more information. 
+- Premium tier clusters are only available with annual entitlement contracts and Flex billing subscriptions.
+- Standard tier clusters in Azure are only available with annual entitlement contracts and Flex billing subscriptions.
+
+### Cluster peering and WAN federation
+
+[As described in the Consul documentation](/consul/docs/connect/cluster-peering#compared-with-wan-federation), cluster peering treats each datacenter as a separate cluster, while WAN federation connects multiple datacenters to make them function as if they were a single cluster. As a result, WAN federation requires a primary datacenter to maintain and replicate global states such as ACLs and configuration entries, but cluster peering does not.
+
+HCP does not support WAN federation in Azure environments. As a result, it is not possible to connect HCP Consul Dedicated clusters hosted on AWS and Azure with WAN federation. You also cannot federate HCP Consul Dedicated and self-managed Community and Enterprise clusters. However, HCP Consul Dedicated supports cluster peering between AWS and Azure clusters, as well as cluster peering between HCP Consul Dedicated and self-managed Community and Enterprise clusters.
+
+
+
+On AWS, HCP Consul Dedicated clusters do not support deployments that use cluster peering and WAN federation concurrently.
+
+
+
+## Constraints and considerations
+
+Cluster peering has the following constraints:
+
+- The clusters must run Consul v1.14.2 or later.
+- If you create a cluster peering connection outside of the HCP UI, the connection still operates, but you cannot edit it afterward to make it appear in the HCP platform.
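The v1.14.2 requirement above can be checked mechanically. The following sketch uses `sort -V` to compare version strings, with the current version hard-coded for illustration — in practice you would read it from `consul version` or the HCP UI:

```shell
required="1.14.2"
current="1.16.1"   # illustrative value, not read from a real cluster

# sort -V orders version strings numerically, so if the required version
# sorts first (or ties), the current version meets the minimum.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "meets cluster peering minimum"
else
  echo "upgrade required before peering"
fi
```

With the values shown, the script prints `meets cluster peering minimum`; swapping in a version older than 1.14.2 prints the upgrade message instead.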
diff --git a/content/hcp-docs/content/docs/consul/extend/disaster-recovery.mdx b/content/hcp-docs/content/docs/consul/extend/disaster-recovery.mdx
new file mode 100644
index 0000000000..23c6b1b75f
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/extend/disaster-recovery.mdx
@@ -0,0 +1,43 @@
+---
+page_title: Failover between regions with HCP Consul Dedicated
+description: |-
+  This topic describes the disaster recovery setup of HCP Consul Dedicated clusters, and how to configure multiple HCP Consul Dedicated clusters to handle a total region failure.
+---
+
+# Failover between regions with HCP Consul Dedicated
+
+@include 'alerts/consul-dedicated-eol.mdx'
+
+This topic describes the disaster recovery setup for HCP Consul Dedicated clusters, including how to configure multiple HCP Consul Dedicated clusters to handle a total region failure by automatically routing service traffic to instances deployed in other regions.
+
+For more information about service failover strategies Consul supports, refer to [Failover overview](/consul/docs/connect/manage-traffic/failover) in the Consul documentation.
+
+## Introduction
+
+HCP Consul Dedicated clusters are designed to recover from almost all disasters automatically. However, we recommend that you set up a few resources of your own to minimize network disruption during a total region failure. Because HCP Consul Dedicated clusters are deployed to a user-specified region, you must use multiple clusters to architect against a region failure.
+
+To implement a minimal failover strategy, deploy two HCP Consul Dedicated clusters in separate regions, with separate instances of the same service deployed in each region. You should deploy Consul clusters in the same region as your services to satisfy both latency requirements and limit the blast radius of large-scale disasters. Then, when one region fails, you can fail over to services deployed in the other region.
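Once the two clusters are peered, a failover like the one described above can also be expressed inside the mesh with a service-resolver configuration entry. The sketch below is an illustration rather than part of this setup: the service name `api` and the peer name `dr-secondary` are assumptions.

```hcl
Kind = "service-resolver"
Name = "api"

Failover = {
  "*" = {
    Targets = [
      # Send traffic to the instances imported from the peered cluster in
      # the other region when no healthy local instances remain.
      { Peer = "dr-secondary" }
    ]
  }
}
```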
+
+## Create a new HCP Consul Dedicated cluster in an alternative region
+
+1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com).
+1. Select the organization or project where you want to create the new cluster.
+1. Click **Consul**.
+1. From the Consul Overview, click **Create a Consul cluster**.
+1. Use the workflow to create a new HCP Consul Dedicated cluster. Give your cluster a name, select a size, and configure accessibility.
+
+It usually takes between 5 and 10 minutes to create the new cluster.
+
+## Configure and deploy services to the alternative region
+
+After the new HCP Consul Dedicated cluster is created, configure Consul and deploy all necessary services to the second region. You can deploy all of your services or just a critical subset, depending on your recovery time or recovery point objectives.
+
+You must also register these services to the Consul datacenter in order to route traffic from another region to them.
+
+## Set up a global failover policy
+
+During a total region outage, you are not able to communicate with the services and the Consul cluster in that region. Therefore, you must set up a global failover policy that can reroute network traffic to your alternative region and the services running there. This failover policy should be triggered by your own disaster recovery procedures.
+
+## Set up cluster peering between clusters
+
+While not required to recover from a regional outage, to provide additional resiliency against service outages, we recommend that you peer the two Consul clusters and [set up sameness groups](/consul/docs/connect/cluster-peering/usage/create-sameness-groups).
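A sameness group can be sketched as a configuration entry like the following. The group and peer names are illustrative, and sameness groups require Consul Enterprise v1.16 or later:

```hcl
Kind = "sameness-group"
Name = "payments"

# With DefaultForFailover set, Consul automatically fails over to the next
# healthy member in the list when local instances are unavailable.
DefaultForFailover = true

Members = [
  { Partition = "default" },    # local instances are preferred
  { Peer = "dr-secondary" }     # peered cluster in the alternative region
]
```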
diff --git a/content/hcp-docs/content/docs/consul/extend/federation.mdx b/content/hcp-docs/content/docs/consul/extend/federation.mdx new file mode 100644 index 0000000000..3f18c606b1 --- /dev/null +++ b/content/hcp-docs/content/docs/consul/extend/federation.mdx @@ -0,0 +1,48 @@ +--- +page_title: WAN federation with HCP Consul Dedicated +description: |- + This topic describes the process to federate HCP Consul Dedicated Consul clusters. When clusters are part of a WAN-federated network, a designated primary datacenter enables Consul to function as if the entire network was a single datacenter. +--- +# WAN federation with HCP Consul Dedicated + +@include 'alerts/consul-dedicated-eol.mdx' + +This topic describes how to create WAN federated connections between Consul clusters in HCP. WAN federation is a strategy for connecting multiple Consul clusters. A user-declared _primary datacenter_ replicates data between one or more _secondary datacenters_, allowing them to function as if they were a single datacenter. + +> **Tutorial:** Complete the [Federate Multiple HCP Consul Dedicated clusters](/consul/tutorials/cloud-production/consul-hcp-federation) tutorial for additional guidance on enabling HCP Consul Dedicated federation. + +## Introduction + +Consul datacenter federation enables operators to extend their Consul environments by connecting multiple HashiCorp Cloud Platform (HCP) Consul clusters together within a region. Federation lowers the operational overhead of connecting applications across distinct regions and improves security. Server-to-server connectivity is automatically handled by the HCP platform. + +Federation is a strategy for connecting datacenters and sharing services between them. However, HCP Consul Dedicated does not support federating clusters hosted on AWS with clusters hosted on Azure. Additionally, you cannot federate HCP Consul Dedicated and self-managed Community and Enterprise clusters. 
+
+For multi-cluster connectivity, we recommend cluster peering instead of federation for most deployments. Consider your network's existing topology and needs to help you determine the most appropriate strategy for your network. Refer to [network topologies](/hcp/docs/consul/concepts/network-topologies) for more information.
+
+## Constraints and considerations
+
+HCP Consul Dedicated provides a dedicated workflow for federating clusters. Only one cluster can be designated the primary datacenter, and you cannot add standalone clusters to an existing federated network. You must create a secondary datacenter through the dedicated workflow in order to federate it with another cluster.
+
+WAN federation with HCP Consul Dedicated is also subject to the following operational constraints:
+
+- Clusters within the HashiCorp Virtual Network (HVN) must have distinct network CIDR blocks in order to be federated.
+- By default, six Consul clusters are allowed in an HCP organization. As a result, one primary cluster and five secondary clusters are supported. You can request a higher limit by filing a [support ticket](https://support.hashicorp.com/hc/en-us/requests/new).
+- On AWS, cluster peering and federation cannot be used on the same cluster concurrently.
+
+Support for WAN federation across regions or cloud providers is determined by the cluster tier of the Consul servers being federated. For more information, refer to [cluster tiers](/hcp/docs/consul/concepts/cluster-tiers).
+
+## Create a WAN-federated network
+
+1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com).
+1. Select the organization or project where you want to create the federated network.
+1. Click **Consul**.
+1. From the Consul Overview, click the cluster ID you want to function as the primary datacenter.
+1. Click **Create secondary**.
+1. Use the workflow to create a new HCP Consul Dedicated cluster. Give your cluster a name, select a size, and configure accessibility.
   You cannot change a cluster's tier or version when adding a secondary datacenter.
1. Click **Create secondary** to begin the automated cluster creation process.

It usually takes between 5 and 10 minutes to create the new cluster. When the process is complete, HCP displays the federated connection.

## Delete a WAN-federated cluster

When an HCP Consul Dedicated cluster is the primary datacenter in a WAN-federated network, HCP does not allow you to delete the cluster while it is still federated with secondary datacenters. Delete all of the secondary datacenters in the federation first, then delete the primary datacenter.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/index.mdx b/content/hcp-docs/content/docs/consul/index.mdx
new file mode 100644
index 0000000000..fc8044db1f
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/index.mdx
@@ -0,0 +1,146 @@
---
page_title: HCP Consul Dedicated overview
description: |-
  This topic provides an overview of HCP Consul Dedicated clusters. Learn more about the overall architecture and user workflows.
---

# HCP Consul Dedicated overview

@include 'alerts/consul-dedicated-eol.mdx'

This topic provides an overview of HCP Consul Dedicated, the networking software as a service (SaaS) product available through the HashiCorp Cloud Platform (HCP). This service provides simplified workflows for common Consul tasks and the option to have HashiCorp set up and manage your Consul servers for you.

> **Tutorial:** For a step-by-step guide to deploying an HCP Consul Dedicated cluster, complete the [getting started tutorial](/consul/tutorials/get-started-hcp).

## What Consul services does HCP provide?
+ +![Diagram of HCP Consul Dedicated architecture](/img/docs/consul/hcp-consul-architecture-light.png#light-theme-only) +![Diagram of HCP Consul Dedicated architecture](/img/docs/consul/hcp-consul-architecture-dark.png#dark-theme-only) + +**HCP Consul Dedicated clusters:** Support service network deployments with Consul servers that we install, configure, and maintain on either AWS or Azure to ensure that your Consul clusters are always ready to connect your services. Refer to [cluster management](/hcp/docs/consul/concepts/cluster-management) for more information. + +HashiCorp previously offered HCP Consul Central, which was deprecated on November 6, 2024. [Learn more about HCP Consul Central](/hcp/docs/consul/concepts/consul-central). + +## Benefits + +Consul is a feature-rich and highly-configurable service networking solution. Configuring, deploying, and maintaining Consul infrastructure can seem daunting, especially for new users. HCP Consul Dedicated removes the need for Consul-specific expertise by handling the most complex operations. + +The benefits to using HCP Consul Dedicated include the following: + +- **Secure by default:** HCP Consul Dedicated servers are deployed with a secure policy that requires connections to have explicit permission. In addition to providing secure network connectivity for features such as datacenter federation and cluster peering, we proactively patch any [Common Vulnerabilities and Exposures (CVE)](https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=consul) to ensure your Consul servers are protected. +- **Fully-managed infrastructure:** You can expect production-ready servers with guaranteed [service level agreements (SLA)](https://portal.cloud.hashicorp.com/sla) that are monitored and maintained by HashiCorp site reliability engineers (SRE). We also provide backup and restore options, freeing you to focus on using Consul and its capabilities. 
- **Push button deployments:** You can use the HCP interface to spin up Consul servers. The interface includes both a guided UI and Terraform automation options for quickly creating new clusters.

## Features

Feature availability for multi-region and multi-cloud networks is based on the tier you use for your Consul clusters.

Cluster size and tier have no impact on the availability of Enterprise features. Most Consul Enterprise features are available to HCP Consul Dedicated clusters as soon as you create them. For more information, including Enterprise license configuration and retrieval, refer to [Consul Enterprise](/consul/docs/enterprise) in the Consul documentation.

### Cluster size and tier

When you create an HCP Consul Dedicated cluster, you are prompted to select a size and a tier for it. This choice determines the number of service instances the cluster can support and the level of multi-region and multi-cloud connectivity the cluster supports, respectively. You cannot change a cluster's tier after its creation.

Refer to [cluster tiers](/hcp/docs/consul/concepts/cluster-tiers) for more information about the cluster sizes and connections each tier supports. For more information about tier compatibility across networks, refer to [network topologies](/hcp/docs/consul/concepts/network-topologies).

The cost of using HCP Consul Dedicated is calculated according to the number of clusters your organization deploys, with larger clusters and higher tiers incurring higher charges over time. Refer to [HCP Consul Dedicated Pricing](https://www.hashicorp.com/products/consul/pricing?ajs_aid=f6bd8009-4bf9-4ea1-a9eb-011771b6da41&product_intent=consul&utm_source=docs) for more information.

### Consul server features

The following table describes Consul server features and their availability by tier.
| Feature | Description | Tier |
| --- | --- | --- |
| [Access controls](/hcp/docs/hcp/create-account) | Secure access to your HCP assets without impeding users. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Admin partitions](/consul/docs/enterprise/admin-partitions) | Define administrative and communication boundaries between services that belong to separate stakeholders or are managed by separate teams. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Automated backups](/consul/docs/enterprise/backups) | Run the snapshot agent in your environment to automatically take snapshots, rotate backups, and send backup files to storage sites. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Cluster peering](/hcp/docs/consul/extend/cluster-peering) | Connect two or more independent clusters so that services deployed to different partitions or datacenters can communicate. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Federation](/hcp/docs/consul/extend/federation) (single-region) | Connect multiple HCP Consul Dedicated clusters within a single region to extend your Consul environment. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Federation](/hcp/docs/consul/extend/federation) (multi-region) | Connect multiple HCP Consul Dedicated clusters across multiple regions to extend your Consul environment. | Development<br/>Standard<br/>Premium |
| HashiCorp management | Create HCP Consul Dedicated clusters. You can use either HCP's interface or Terraform. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Managed upgrades](/hcp/docs/consul/upgrade) | Update your HCP Consul Dedicated cluster to the next available major version. You can use either HCP's interface or Terraform. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Namespaces](/consul/docs/enterprise/namespaces) | Separate services, Consul KV data, and other Consul data by team so that different teams in the same organization can share Consul datacenters. | Development (testing only)<br/>Essentials<br/>Standard<br/>Premium |
| Web UI | Access Consul's web UI, which provides information about nodes, services, and other cluster components. | Development<br/>Essentials<br/>Standard<br/>Premium |

On AWS, cluster peering and federation cannot be used on the same cluster concurrently.

| Feature | Description | Tier |
| --- | --- | --- |
| [Access controls](/hcp/docs/hcp/create-account) | Secure access to your HCP assets without impeding users. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Admin partitions](/consul/docs/enterprise/admin-partitions) | Define administrative and communication boundaries between services that belong to separate stakeholders or are managed by separate teams. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Automated backups](/consul/docs/enterprise/backups) | Run the snapshot agent in your environment to automatically take snapshots, rotate backups, and send backup files to storage sites. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Cluster peering](/hcp/docs/consul/usage/cluster-peering) | Connect two or more independent clusters so that services deployed to different partitions or datacenters can communicate. | Development<br/>Essentials<br/>Standard<br/>Premium |
| HashiCorp management | Create HCP Consul Dedicated clusters. You can use either HCP's interface or Terraform. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Managed upgrades](/hcp/docs/consul/usage/upgrades) | Update your HCP Consul Dedicated cluster to the next available major version. You can use either HCP's interface or Terraform. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Namespaces](/consul/docs/enterprise/namespaces) | Separate services, Consul KV data, and other Consul data by team so that different teams in the same organization can share Consul datacenters. | Development<br/>Essentials<br/>Standard<br/>Premium |
| Web UI | Access Consul's web UI, which provides information about nodes, services, and other cluster components. | Development<br/>Essentials<br/>Standard<br/>Premium |
### Consul client features

The following table describes Consul client features and their availability by the tier of the Consul server they are registered to.

| Feature | Description | Tier |
| --- | --- | --- |
| Broad runtime support | Deploy clients to a range of runtimes. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Consul API Gateway](/consul/docs/api-gateway) | Consul API Gateway is a special gateway that allows external network clients to access applications and services running in a Consul datacenter. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Gateways](/consul/docs/connect/gateways) | Ingress, terminating, and mesh gateways provide connectivity into, out of, and between Consul service meshes. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Health checks](/consul/docs/discovery/checks) | Define checks to monitor the health of nodes in your network. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Kubernetes CRDs](/consul/docs/k8s/crds) | Use Custom Resource Definitions (CRDs) to manage custom Consul configuration entries on Kubernetes. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Observability integrations](/consul/docs/connect/observability) | Use L7 observability features in your service mesh. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Service discovery](/consul/docs/discovery/services) | Register services and make them available to the network. | Development<br/>Essentials<br/>Standard<br/>Premium |
| [Service mesh](/consul/docs/connect) | Provide secure service-to-service communication within and across infrastructure. | Development<br/>Essentials<br/>Standard<br/>Premium |
## Workflows

Using HCP Consul Dedicated consists of the following workflow phases:

- [Deploy an HCP Consul Dedicated cluster](/hcp/docs/consul/hcp-managed). Create an HCP Consul Dedicated cluster, connect it to services deployed in your environment, and access the cluster through its CLI, API, or Consul UI.
- [Secure your network](/hcp/docs/consul/secure). Change an HCP Consul Dedicated cluster's accessibility or create service intentions to secure service mesh traffic.
- [Extend your network](/hcp/docs/consul/extend/cluster-peering). Create WAN-federated clusters or create cluster peering connections so that services deployed to different regions can communicate. Build multi-cloud deployments with cluster peering.
- [Monitor your network](/hcp/docs/consul/monitor). Access an HCP Consul Dedicated server's audit logs, platform logs, and server logs.
- [Upgrade your network](/hcp/docs/consul/upgrade). Check the version of Consul currently running on clusters and upgrade them using the HCP interface. Create and manage snapshots to restore clusters in the event of failure.

## Constraints and considerations

The following constraints may cause HCP Consul Dedicated to function inconsistently:

- HVN peering connections with a VPC or VNet support RFC1918 IP addresses only.
- The Consul `monitor` command is not supported on HCP Consul Dedicated.
- You cannot use WAN federation and multiple admin partitions at the same time.
- You cannot use WAN federation and cluster peering at the same time.
- HCP Consul Dedicated does not support AWS Certificate Manager as a certificate authority for your service mesh.
- You may experience issues connecting HCP Consul Dedicated v1.11.0 clusters to Consul Enterprise clients versions 1.10.0-1.10.6. If you want to connect to HCP Consul Dedicated v1.11.2 or later, we recommend using Consul Enterprise v1.10.7 for your clients. This issue only applies to Consul Enterprise binaries.
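The RFC1918 constraint on HVN peering is mechanical enough to pre-check before you configure a connection. The following is a minimal sketch using Python's standard `ipaddress` module; the CIDR values are illustrative, not taken from this document:

```python
import ipaddress

# RFC1918 private address ranges, the only ranges HVN peering supports
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(cidr: str) -> bool:
    """Return True if every address in cidr falls inside an RFC1918 range."""
    net = ipaddress.ip_network(cidr, strict=False)
    return any(net.subnet_of(block) for block in RFC1918)

print(is_rfc1918("10.25.16.0/20"))   # True: eligible for HVN peering
print(is_rfc1918("203.0.113.0/24"))  # False: public range, not supported
```

Running the check against a planned VPC or VNet CIDR before peering avoids a connection that silently fails to route.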
diff --git a/content/hcp-docs/content/docs/consul/migrate.mdx b/content/hcp-docs/content/docs/consul/migrate.mdx new file mode 100644 index 0000000000..f6e09ea81f --- /dev/null +++ b/content/hcp-docs/content/docs/consul/migrate.mdx @@ -0,0 +1,354 @@ +--- +page_title: Migrate Consul Dedicated cluster to self-managed Enterprise +description: |- + Learn how to migrate existing HCP Consul Dedicated clusters to self-managed Enterprise deployments. You can migrate clusters running on VMs or Kubernetes. +--- + +# Migrate Consul Dedicated cluster to self-managed Enterprise + +This page describes the process to migrate operations from an HCP Consul Dedicated cluster to a self-managed Consul Enterprise cluster. HashiCorp plans to retire HCP Consul Dedicated on November 12, 2025. + +## HCP Consul Dedicated End of Life + +On November 12, 2025, HashiCorp will end operations and support for HCP Consul Dedicated clusters. After this date, you will no longer be able to deploy new Dedicated clusters, nor will you be able to access, update, or manage existing Dedicated clusters. + +We recommend migrating HCP Consul Dedicated deployments to self-managed server clusters running Consul Enterprise. On virtual machines, this migration requires some downtime for the server cluster but enables continuity between existing configurations and operations. Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful. + +## Migration workflows + +The process to migrate a Dedicated cluster to a self-managed environment consists of the following steps, which change depending on whether your cluster runs on virtual machines (VMs) or Kubernetes. + +### VMs + +To migrate on VMs, complete the following steps: + +1. [Take a snapshot of the HCP Consul Dedicated cluster](#take-a-snapshot-of-the-hcp-consul-dedicated-cluster). +1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster). +1. 
[Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment). +1. [Update the client configuration file to point to the new server](#update-the-client-configuration-file-to-point-to-the-new-server). +1. [Restart the client agent and verify that the migration was successful](#restart-the-client-agent-and-verify-that-the-migration-was-successful). +1. [Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources](#disconnect-supporting-resources-and-decommission-the-hcp-consul-dedicated-cluster). + +### Kubernetes + +To migrate on Kubernetes, complete the following steps: + +1. [Take a snapshot of the HCP Consul Dedicated cluster](#take-a-snapshot-of-the-hcp-consul-dedicated-cluster-1). +1. [Transfer the snapshot to a self-managed cluster](#transfer-the-snapshot-to-a-self-managed-cluster-1). +1. [Use the snapshot to restore the cluster in your self-managed environment](#use-the-snapshot-to-restore-the-cluster-in-your-self-managed-environment). +1. [Update the CoreDNS configuration](#update-the-coredns-configuration). +1. [Update the `values.yaml` file](#update-the-values-yaml-file). +1. [Upgrade the cluster](#upgrade-the-cluster). +1. [Redeploy workload applications](#redeploy-workload-applications). +1. [Switch the CoreDNS entry](#switch-the-coredns-entry). +1. [Verify that the migration was successful](#verify-that-the-migration-was-successful). +1. [Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources](#disconnect-and-decommission-the-hcp-consul-dedicated-cluster-and-its-supporting-resources). + +## Recommendations and best practices + +On VMs, the migration process requires a temporary outage that lasts from the time when you restore the snapshot on the self-managed cluster until the time when you restart client agents after updating their configuration. 
Downtime is not required on Kubernetes, although we suggest scheduling downtime to ensure the migration is successful.

In addition, data written to the Dedicated server after the snapshot is created cannot be restored.

To limit the duration of outages, we recommend using a dev environment to test the migration before fully migrating production workloads. The length of the outage depends on the number of clients, the self-managed environment, and the automated processes involved.

Regardless of whether you use VMs or Kubernetes, we also recommend using [Consul maintenance mode](/consul/commands/maint) to schedule a period of inactivity to address unforeseen data loss or data sync issues that result from the migration.

## Prerequisites

The migration instructions on this page make the following assumptions about your existing infrastructure:

- You already deployed an HCP Consul Dedicated server cluster and a self-managed server cluster with matching configurations. These configurations should include the following settings:
  - Both clusters have 3 nodes.
  - ACLs, TLS, and gossip encryption are enabled.
- You have command line access to both the Dedicated cluster and your self-managed cluster.
- You [generated an admin token for the Dedicated cluster](/hcp/docs/consul/dedicated/access#generate-admin-token) and exported it to the `CONSUL_HTTP_TOKEN` environment variable. Alternatively, add the `-token=` flag to CLI commands.
- The clusters are connected through an existing VPC or peering connection.
- You already identified the client nodes affected by the migration.

If you are migrating clusters on Kubernetes, refer to the [version compatibility matrix](/consul/docs/k8s/compatibility#compatibility-matrix) to ensure that you are using compatible versions of `consul` and `consul-k8s`.

In addition, you must migrate to an Enterprise cluster, which requires an Enterprise license. Migrating to Community edition clusters is not possible.
If you do not have access to a Consul Enterprise license, [file a support request to let us know](https://support.hashicorp.com/hc/en-us/requests/new). A member of the account team will reach out to assist you.

## Migrate to self-managed on VMs

To migrate to a self-managed Consul Enterprise cluster on VMs, [connect to the Dedicated cluster's current leader node](/hcp/docs/consul/dedicated/access) and then complete the following steps.

### Take a snapshot of the HCP Consul Dedicated cluster

A snapshot is a backup of your HCP Consul cluster's state. Consul uses this snapshot to restore its previous state in the new self-managed environment.

Run the following command to create a snapshot.

```shell-session
$ consul snapshot save /home/backup/hcp-cluster.snapshot
Saved and verified snapshot to index 4749
```

For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/save).

### Transfer the snapshot to a self-managed cluster

Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.

```shell-session
$ scp /home/backup/hcp-cluster.snapshot <user>@<self-managed-host>:/home/backup
```

### Use the snapshot to restore the cluster in your self-managed environment

After you transfer the snapshot file to the self-managed node, restore the cluster's state from the snapshot in your self-managed environment.

Export the `CONSUL_HTTP_TOKEN` environment variable in your self-managed environment and then run the following command.

```shell-session
$ consul snapshot restore /home/backup/hcp-cluster.snapshot
Restored snapshot
```

If you cannot use environment variables, add the `-token=` flag to the command:

```shell-session
$ consul snapshot restore /home/backup/hcp-cluster.snapshot -token="<token>"
Restored snapshot
```

For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore).
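The `consul snapshot save` command wraps Consul's snapshot HTTP endpoint, so the same backup can be scripted without the CLI. Below is a hedged sketch; the cluster address and token are placeholders, not values from this guide:

```python
import urllib.request

def build_snapshot_request(addr: str, token: str) -> urllib.request.Request:
    """Build a GET request against Consul's /v1/snapshot endpoint.

    The response body is a gzipped archive of cluster state, the same
    artifact that `consul snapshot save` writes to disk.
    """
    req = urllib.request.Request(f"{addr.rstrip('/')}/v1/snapshot")
    req.add_header("X-Consul-Token", token)  # same auth as CONSUL_HTTP_TOKEN
    return req

# Hypothetical cluster address and token
req = build_snapshot_request("https://example-cluster.consul.cloud:443", "<admin-token>")
print(req.full_url)  # https://example-cluster.consul.cloud:443/v1/snapshot
# To actually save a backup, open the request with urllib.request.urlopen(req)
# and write the response body to a .snapshot file.
```

This can be useful for scheduling periodic backups from a host that has network access to the cluster but no Consul binary installed.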
### Update the client configuration file to point to the new server

Modify the agent configuration on your Consul clients. You must update the following configuration values:

- `retry_join` IP address
- TLS encryption
- ACL token

You can use an existing certificate authority or create a new one in your self-managed cluster. For more information, refer to [Service mesh certificate authority overview in the Consul documentation](/consul/docs/connect/ca).

The following example demonstrates a modified client configuration.

```hcl
retry_join = ["<self-managed-server-ip>"]

# auto_encrypt is a top-level stanza on clients; setting tls = true
# makes the client request its TLS certificate from the servers.
auto_encrypt {
  tls = true
}

tls {
  defaults {
    verify_incoming = true
    verify_outgoing = true
  }
}

acl {
  enabled = true
  default_policy = "deny"
  enable_token_persistence = true
  tokens {
    agent = "<acl-token>"
  }
}
```

For more information about configuring these fields, refer to the [agent configuration reference in the Consul documentation](/consul/docs/agent/config/config-files).

### Restart the client agent and verify that the migration was successful

Restart the client to apply the updated configuration and reconnect it to the new cluster.

```shell-session
$ sudo systemctl restart consul
```

After you update and restart all of the client agents, check the catalog to ensure that clients migrated successfully. You can check the Consul UI or run the following CLI command.

```shell-session
$ consul members
```

Run `consul members` on the Dedicated cluster as well. Ensure that all clients appear as `inactive` or `left`.

### Disconnect supporting resources and decommission the HCP Consul Dedicated cluster

After you confirm that your client agents successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources, such as HVNs. If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.
Then delete the HCP Consul Dedicated cluster. For more information, refer to [Delete an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/delete).

## Migrate to self-managed on Kubernetes

To migrate to a self-managed Consul Enterprise cluster on Kubernetes, [connect to the Dedicated cluster's current leader node](/hcp/docs/consul/dedicated/access) and then complete the following steps.

### Take a snapshot of the HCP Consul Dedicated cluster

A snapshot is a backup of your HCP Consul cluster's state. Consul uses this snapshot to restore its previous state in the new self-managed environment.

[Connect to the HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/access) and then run the following command to create a snapshot.

```shell-session
$ consul snapshot save /home/backup/hcp-cluster.snapshot
Saved and verified snapshot to index 4749
```

For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/save).

### Transfer the snapshot to a self-managed cluster

Use a secure copy (SCP) command to move the snapshot file to the self-managed Consul cluster.

```shell-session
$ scp /home/backup/hcp-cluster.snapshot <user>@<self-managed-host>:/home/backup
```

### Use the snapshot to restore the cluster in your self-managed environment

After you transfer the snapshot file to the self-managed node, use the `kubectl exec` command to restore the cluster's state in your self-managed Kubernetes environment.

```shell-session
$ kubectl exec <pod-name> -c consul-server-0 -- consul snapshot restore /home/backup/hcp-cluster.snapshot
Restored snapshot
```

For more information on this command, refer to the [Consul CLI documentation](/consul/commands/snapshot/restore).

### Update the CoreDNS configuration

Update the CoreDNS configuration on your Kubernetes cluster to point to the Dedicated cluster's IP address. Make sure the configured hostname resolves correctly to the cluster's IP from inside a deployed pod.
```yaml
Corefile: |-
  .:53 {
    errors
    health {
      lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
      ttl 30
    }
    hosts {
      35.91.49.134 server.hcp-managed.consul
      fallthrough
    }
    prometheus 0.0.0.0:9153
    forward . 8.8.8.8 8.8.4.4 /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
  }
```

If there are issues when you attempt to resolve the hostname, check whether the nameserver resolves to the `CLUSTER-IP` inside the pod. Run the following command to return the `CLUSTER-IP`.

```shell-session
$ kubectl -n kube-system get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
coredns   ClusterIP   10.100.224.88   <none>        53/UDP,53/TCP   4h24m
```

### Update the `values.yaml` file

Update the Helm configuration or `values.yaml` file for your self-managed cluster. You should update the following fields:

- Update the server host value. Use the host name you added when you updated the CoreDNS configuration.
- Create a Kubernetes secret in the `consul` namespace with a new CA file created by adding the contents of all of the following CA files. Add the CA file contents of the new self-managed server at the end.
  - [https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x1-cross-signed.pem)
  - [https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem](https://letsencrypt.org/certs/isrg-root-x2-cross-signed.pem)
  - [https://letsencrypt.org/certs/2024/e5-cross.pem](https://letsencrypt.org/certs/2024/e5-cross.pem)
  - [https://letsencrypt.org/certs/2024/e6-cross.pem](https://letsencrypt.org/certs/2024/e6-cross.pem)
  - [https://letsencrypt.org/certs/2024/r10.pem](https://letsencrypt.org/certs/2024/r10.pem)
  - [https://letsencrypt.org/certs/2024/r11.pem](https://letsencrypt.org/certs/2024/r11.pem)
- Update the `tlsServerName` field to the appropriate value. It is usually the hostname of the managed cluster.
If the value is not known, TLS verification fails when you apply this configuration and the error log lists possible values. +- Set `useSystemRoots` to `false` to use the new CA certs. + +For more information about configuring these fields, refer to the [Consul on Kubernetes Helm chart reference](/consul/docs/k8s/helm). + +### Upgrade the cluster + +After you update the `values.yaml` file, run the following command to update the self-managed Kubernetes cluster. + +```shell-session +$ consul-k8s upgrade -config-file=values.yaml +``` + +This command redeploys the Consul pods with the updated configurations. Although the CoreDNS installation still points to the Dedicated cluster, the pods have access to the new CA file. + +### Redeploy workload applications + +Redeploy all the workload applications so that the `init` containers run again and fetch the new CA file. After you redeploy the applications, run a `kubectl describe pod` command on any workload pod and verify the output resembles the following example. 
```shell-session
$ kubectl describe pod -l name="product-api-8cf8c8ccc-kvkk8"
Environment:
  POD_NAME:            product-api-8cf8c8ccc-kvkk8 (v1:metadata.name)
  POD_NAMESPACE:       default (v1:metadata.namespace)
  NODE_NAME:           (v1:spec.nodeName)
  CONSUL_ADDRESSES:    server.consul.one
  CONSUL_GRPC_PORT:    8502
  CONSUL_HTTP_PORT:    443
  CONSUL_API_TIMEOUT:  5m0s
  CONSUL_NODE_NAME:    $(NODE_NAME)-virtual
  CONSUL_USE_TLS:      true
  CONSUL_CACERT_PEM:   -----BEGIN CERTIFICATE-----\r
MIIFYDCCBEigAwIBAgIQQAF3ITfU6UK47naqPGQKtzANBgkqhkiG9w0BAQsFADA/\r
MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT\r
DkRTVCBSb290IENBIFgzMB4XDTIxMDEyMDE5MTQwM1oXDTI0MDkzMDE4MTQwM1ow\r
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh\r
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwggIiMA0GCSqGSIb3DQEB\r
AQUAA4ICDwAwggIKAoICAQCt6CRz9BQ385ueK1coHIe+3LffOJCMbjzmV6B493XC
```

### Switch the CoreDNS entry

Update the CoreDNS configuration with the self-managed server's IP address.

If the `tlsServerName` of the self-managed cluster is different from the `tlsServerName` on the Dedicated cluster, you must update the field and re-run the `consul-k8s upgrade` command. For self-managed clusters, the `tlsServerName` usually takes the form `server.<datacenter>.consul`.

### Verify that the migration was successful

After you update the CoreDNS entry, check the Consul catalog to ensure that the migration was successful. You can check the Consul UI or run the following CLI command.

```shell-session
$ kubectl exec <pod-name> -c consul-server-0 -- consul members
```

Run `consul members` on the Dedicated cluster as well. Ensure that all service nodes appear as `inactive` or `left`.

### Disconnect and decommission the HCP Consul Dedicated cluster and its supporting resources

After you confirm that your services successfully connected to the self-managed cluster, delete VPC peering connections and any other unused resources, such as HVNs.
If you use other HCP services, ensure that these resources are not currently in use. After you delete a peering connection or an HVN, it cannot be used by any HCP product.

Then delete the HCP Consul Dedicated cluster. For more information, refer to [Delete an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/delete).

## Troubleshooting

You might encounter errors when migrating from an HCP Consul Dedicated cluster to a self-managed Consul Enterprise cluster.

### Troubleshoot on VMs

If you encounter a `403 Permission Denied` error when you attempt to generate a new ACL bootstrap token, or if you misplace the bootstrap token, you can update the Raft index to reset the ACL system. Use the Raft index number included in the error output to write the reset index into the bootstrap reset file. You must run this command on the leader node.

The following example uses `13` as its Raft index:

```shell-session
$ echo 13 >> consul.d/acl-bootstrap-reset
```

### Troubleshoot on Kubernetes

If you encounter issues resolving the hostname, check whether the nameserver matches the `CLUSTER-IP`. One possible cause is that the `clusterDNS` field in the kubelet configuration points to an IP address that differs from the one the Kubernetes worker nodes use. In that case, change the kubelet configuration to use the `CLUSTER-IP` and then restart the kubelet process on all nodes.

## Support

If you have questions or need additional help when migrating to a self-managed Consul Enterprise cluster, [submit a request to our support team](https://support.hashicorp.com/hc/en-us/requests/new).
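
The kubelet check described in the Kubernetes troubleshooting section can be sketched with the following commands. The kubelet configuration path is an assumption here; it varies by distribution:

```shell
# Print the cluster DNS service IP that pods should use as their nameserver.
kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'

# Print the DNS address the kubelet hands to pods (path varies by distribution).
grep -A1 clusterDNS /var/lib/kubelet/config.yaml
```

If the address under `clusterDNS` does not match the `CLUSTER-IP` returned by the first command, update the kubelet configuration and restart the kubelet process on each node.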
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/monitor/audit-logs.mdx b/content/hcp-docs/content/docs/consul/monitor/audit-logs.mdx
new file mode 100644
index 0000000000..02c5db40ce
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/monitor/audit-logs.mdx
@@ -0,0 +1,42 @@
---
page_title: Audit Logs
description: |-
  This topic describes how to access audit logs in HCP Consul Dedicated. Audit logs record access to the Consul server's HTTP API, including the token used to make each API call.
---

# Audit logs

@include 'alerts/consul-dedicated-eol.mdx'

This topic describes how to monitor your network using HCP Consul Dedicated's audit logging functionality. Audit logs record data about requests made to the Consul server's HTTP API.

For more information about audit logging, including an example of an audit log, refer to [Audit Logging](/consul/docs/enterprise/audit-logging) in the Consul documentation.

## Introduction

Audit logs can provide greater insight into Consul access and usage patterns for the security and compliance teams in your HCP organization. They capture information about Consul-authenticated events that occur through the HTTP API. This information includes a timestamp, the operation method, the endpoint, and the accessor ID associated with the token used to make the API call.

You can obtain a token using the Consul CLI, HTTP API, or Consul UI. These tokens correlate with the accessor ID in the audit log. Refer to the [ACL tokens documentation](/consul/docs/security/acl/acl-tokens) to learn about accessor IDs and other ACL token metadata.

Audit logging is enabled by default on Essentials, Standard, and Premium [cluster tiers](/hcp/docs/consul/concepts/cluster-tiers).

## Retrieve audit logs

HCP Consul Dedicated keeps a cluster's audit logs in an encrypted storage environment in the same region as the cluster.
You can retrieve audit logs in 24-hour increments from [the HCP Portal](https://portal.cloud.hashicorp.com/).

1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com).
1. Select the organization or project where you created the cluster.
1. Click **Consul**.
1. From the Consul Overview, click the cluster ID of the servers whose audit logs you want to access.
1. Click **Audit logs** in the sidebar menu.
1. Specify a range of dates and times. Each period of up to 24 hours specified in the range downloads as a separate archive.
1. Click **Request download**. HCP begins preparing the audit logs. You can navigate away from the audit log screen during this process.
1. When the download request is ready, the status appears as `Available`. Click the download icon next to each archive.

Under **Latest download requests**, links to download audit log archives are available for 24 hours after their creation.

## Log retention

Audit logs are stored within the platform for a minimum of one year. HCP began archiving audit logs in February 2022.

Audit logs remain available after the cluster associated with them has been deleted. Contact [HashiCorp Support](/hcp/docs/hcp/admin/support) if you need access to logs from deleted clusters.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/monitor/metrics.mdx b/content/hcp-docs/content/docs/consul/monitor/metrics.mdx
new file mode 100644
index 0000000000..dcd6eecce9
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/monitor/metrics.mdx
@@ -0,0 +1,126 @@
---
page_title: Metrics from HCP Consul Dedicated clusters
description: |-
  Learn how to scrape an HCP Consul Dedicated cluster's server metrics using Prometheus or OpenTelemetry Collector.
---

# Metrics from HCP Consul Dedicated clusters

@include 'alerts/consul-dedicated-eol.mdx'

This page describes how to scrape [Consul server metrics](/consul/docs/agent/monitor/telemetry) from HCP Consul Dedicated clusters.
HashiCorp [ensures the availability of HCP Consul Dedicated clusters](/hcp/docs/consul/concepts/cluster-management). You can set up a custom metrics collector to track cluster usage.

## Prerequisites

To access server metrics, you need a telemetry agent capable of both resolving a DNS record to IP addresses and scraping Prometheus-format metrics. This page provides example configurations for [Prometheus](https://prometheus.io/download/) and [OpenTelemetry Collector using Docker](https://opentelemetry.io/docs/collector/getting-started/#docker).

Accessing server metrics requires the cluster's address and an ACL token with a minimum of [`agent:read` permission](/consul/docs/security/acl/acl-rules). To get the cluster's address and create an ACL token for HCP Consul Dedicated, refer to [Access HCP Consul Dedicated clusters](/hcp/docs/consul/dedicated/access).

## Scrape server metrics

Configure your telemetry collector to scrape metrics from the cluster's `/v1/agent/metrics` endpoint and then start the collector. The following examples demonstrate configurations for Prometheus and OpenTelemetry Collector.

### Prometheus

To use a Prometheus metrics collector, configure it to do the following:

1. Resolve the cluster's [DNS address to IP addresses](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config).
1. Scrape metrics from each server's [`/v1/agent/metrics` endpoint](/consul/api-docs/agent#view-metrics).
1. Call the agent endpoint with a Bearer token authorized to read metrics.

In the following example configuration, these fields are specified so that the collector scrapes agent metrics every 60 seconds by running a job named `hcp-consul-cluster`.

```yaml
---
global:
  scrape_interval: "60s"

scrape_configs:
  - job_name: "hcp-consul-cluster"

    # Resolve the IP addresses of servers.
    scheme: "https"
    dns_sd_configs:
      - names:
          - ""
        type: "A"
        port: 443

    # Set the Consul token as the Bearer token.
    authorization:
      credentials: ""

    # Query the agent metrics endpoint.
    metrics_path: "/v1/agent/metrics"

    # Disable TLS verification.
    tls_config:
      insecure_skip_verify: true
```

To start the metrics collector, run the following command:

```shell-session
$ prometheus --config.file=scrape.yaml
```

### OpenTelemetry Collector

You can use the [Prometheus Receiver in the OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/prometheusreceiver) to scrape metrics from an HCP Consul Dedicated cluster.

The following example configures a metrics pipeline with a [Processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md) and [Exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlpexporter). The OTLP Exporter requires an OTLP gRPC endpoint. Observability platforms such as [New Relic](https://docs.newrelic.com/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/get-started/opentelemetry-set-up-your-app/) and [Honeycomb](https://docs.honeycomb.io/getting-data-in/opentelemetry-overview/#using-the-honeycomb-opentelemetry-endpoint) have public endpoints for metrics ingestion. Other platforms such as Datadog have [their own Exporter](https://docs.datadoghq.com/opentelemetry/otel_collector_datadog_exporter).
+ + + +```yaml +--- +receivers: + prometheus: + config: + global: + scrape_interval: "60s" + scrape_configs: + - job_name: "hcp-consul-cluster" + scheme: "https" + dns_sd_configs: + - names: + - "" + type: "A" + port: 443 + authorization: + credentials: "" + metrics_path: "/v1/agent/metrics" + tls_config: + insecure_skip_verify: true + +processors: + batch: + send_batch_max_size: 1000 + send_batch_size: 100 + timeout: "60s" + +exporters: + otlp: + endpoint: "" + +service: + pipelines: + metrics: + receivers: [prometheus] + processors: [batch] + exporters: [otlp] +``` + + + +To [start the Collector in a Docker container](https://opentelemetry.io/docs/collector/getting-started/#docker), run the following command: + +```shell-session +$ docker run -v $(pwd)/config.yaml:/etc/otelcol-contrib/config.yaml otel/opentelemetry-collector-contrib:0.86.0 +``` diff --git a/content/hcp-docs/content/docs/consul/monitor/server-logs.mdx b/content/hcp-docs/content/docs/consul/monitor/server-logs.mdx new file mode 100644 index 0000000000..789d043243 --- /dev/null +++ b/content/hcp-docs/content/docs/consul/monitor/server-logs.mdx @@ -0,0 +1,67 @@ +--- +page_title: Server Logs +description: |- + Learn how to use the `consul monitor` CLI command to access server logs for an HCP Consul Dedicated cluster. +--- + +# Server logs + +@include 'alerts/consul-dedicated-eol.mdx' + +This page describes how to access server logs for HCP Consul Dedicated clusters. After you [access an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/access), you can use the CLI to view the logs for individual clusters. + +For information on accessing a server's audit logs to monitor API calls, refer to [audit logs](/hcp/docs/consul/monitor/audit-logs). + +## Prerequisites + +Accessing server logs requires the Consul CLI. If you do not have access to the CLI on your workstation, [download and install Consul](/consul/downloads). 
+ +Accessing server logs also requires the server's address and an admin token for the HCP Consul Dedicated cluster. For instructions on accessing this information, refer to [Access HCP Consul Dedicated clusters](/hcp/docs/consul/dedicated/access). + +## View server logs + +The process to view an individual server's logs consists of the following steps: + +1. Use the cluster DNS address to resolve the IP addresses for the individual Consul servers. +1. For each server, run `consul monitor` in a separate terminal. + +### Resolve server addresses + +Run `nslookup` or a similar tool to resolve the DNS address to individual server IPs. The lookup succeeds for both private and public clusters, but the IPs for private clusters are not routable from your workstation unless you are within your corporate VPN or on a jump server. + +Update the following command to include the HCP Consul Dedicated server's DNS address. + +```shell-session +$ nslookup consul-cluster-name.consul.alphanumeric-id.aws.hashicorp.cloud + +Server: 192.168.1.254 +Address: 192.168.1.254#53 + +Non-authoritative answer: +Name: consul-cluster-name.consul.alphanumeric-id.aws.hashicorp.cloud +Address: 172.25.22.78 +Name: consul-cluster-name.consul.alphanumeric-id.aws.hashicorp.cloud +Address: 172.25.27.214 +Name: consul-cluster-name.consul.alphanumeric-id.aws.hashicorp.cloud +Address: 172.25.17.212 +``` + +### Access server logs + +Turn off SSL verification and run `consul monitor` on an individual server's IP address to output server logs to the terminal. You must disable SSL verification because the certificate for the servers is only valid for the domain name, not the individual IP address. In most cases you can use the domain name to interact with the server, but in this specific case you must make requests to each individual IP. 
Update the following command to include your server address and a valid admin token:

```shell-session
$ CONSUL_HTTP_SSL_VERIFY=false consul monitor \
    -http-addr https://172.25.22.78 \
    -token 
```

You can also add the `-log-level` flag to specify a log level. The default log level is `info`. Available log levels are `trace`, `debug`, `info`, `warn`, and `error`.

It may take a minute or two for the logs to appear in the terminal. A known issue causing this delay was fixed in the following Consul versions:

- v1.15.2
- v1.14.6
- v1.13.7
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/secure/index.mdx b/content/hcp-docs/content/docs/consul/secure/index.mdx
new file mode 100644
index 0000000000..e7d8ff03d3
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/secure/index.mdx
@@ -0,0 +1,89 @@
---
page_title: Consul security overview
description: |-
  Consul has several built-in security features that are used with HCP Consul Dedicated. Learn what makes them secure and the role you play in network security.
---

# HCP Consul security overview

@include 'alerts/consul-dedicated-eol.mdx'

This topic provides an overview of network security considerations when using HCP Consul Dedicated. By default, Consul deployments communicate securely across all protocols and user interactions. To enforce security, Consul uses gossip encryption, transport layer security (TLS) encryption, and access control lists (ACLs).

To learn more about what makes Consul secure and potential security threats, refer to [Core security model](/consul/docs/security) in the Consul documentation.

## Introduction

Consul has several mechanisms to ensure network security that function regardless of whether you use HCP Consul Dedicated or self-managed Community and Enterprise clusters.
The following mechanisms ensure that communication within a Consul cluster's service mesh can only take place between valid hosts:

- Gossip encryption
- Transport Layer Security (TLS) encryption
- Access Control List (ACL) system

The HashiCorp Cloud Platform (HCP) provides additional features to improve network security. The following features are available:

- HashiCorp Virtual Network (HVN) peering
- Private cluster accessibility
- IP allowlist

HCP Consul Dedicated does not automate the process to configure service intentions. After registering services with the Consul servers, you should create service intentions to ensure that only authorized services can communicate within the service mesh. Refer to [service intentions overview](/consul/docs/connect/intentions) for more information.

## Gossip encryption

Consul uses a gossip protocol to manage membership and broadcast messages to the cluster. Intra-cluster communication is secured with a key that Consul agents use for authentication over the protocol.

When using HCP Consul Dedicated clusters, this key is generated for you and included in the client configuration you use to [deploy clients](/hcp/docs/consul/dedicated/clients). Keep this configuration secure to avoid unwanted access to the gossip pool.

To learn more, refer to [Gossip protocol](/consul/docs/architecture/gossip) in the Consul documentation.

## Transport Layer Security (TLS) encryption

Consul uses TLS encryption to secure communication between agents. A built-in certificate authority allows you to create, distribute, and rotate X.509 certificates so that agents and proxies in the service mesh send only verified requests to services.

HCP Consul Dedicated automatically creates, manages, and rotates TLS certificates. Consul stores certificates in a secure HCP Consul Dedicated Vault environment. Certificates automatically expire after one year.
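
HCP rotates these certificates for you, but you can inspect the certificate a cluster currently serves with `openssl`. The cluster address below is a placeholder in the same format used elsewhere in these docs:

```shell
# Fetch the certificate the cluster serves and print its subject and expiry.
echo | openssl s_client -connect consul-cluster-name.consul.alphanumeric-id.aws.hashicorp.cloud:443 2>/dev/null \
    | openssl x509 -noout -subject -enddate
```

The `notAfter` field in the output shows when the currently served certificate expires.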
## Access Control Lists (ACLs)

Consul uses an ACL system to secure access to cluster data during user and agent requests. The ACL system consists of the following configurable components:

- [ACL token](/consul/docs/security/acl/tokens)
- [ACL policy](/consul/docs/security/acl/acl-policies)
- [ACL role](/consul/docs/security/acl/acl-roles)

The ACL system is always enabled when using HCP Consul Dedicated.

HCP also creates an ACL token every time you generate an admin token to [access an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/access).

The tokens that HCP Consul Dedicated creates and has access to are stored in a secure HCP Consul Dedicated Vault environment.

You can create and manage ACL tokens using the [`consul acl` CLI command](/consul/commands/acl), the [`/acl` API endpoint](/consul/api-docs/acl), or a cluster's Consul UI. You can access the Consul UI for HCP Consul Dedicated clusters directly through the HCP platform.

Be sure to configure your HCP organization's user roles to ensure that only authorized users have access to clusters through HCP. Refer to [user roles and ACL policies](/hcp/docs/consul/self-managed#user-roles-and-acl-policies) for more information about the ACL policies linked to each user role. For more information about configuring roles in your organization, refer to [Users](/hcp/docs/hcp/admin/users).

## HashiCorp Virtual Network (HVN) peering

An HVN is an essential networking component when using HCP Consul Dedicated clusters. With a peering connection or a transit gateway attachment between an HVN and a VPC or VNet, servers in HCP Consul Dedicated environments can establish secure connections with services hosted in user-managed environments.

You can connect an HVN to either an AWS or an Azure environment. You cannot deploy a product across multiple HVNs or change HVNs after you create them.
To create multi-cloud deployments, [establish a cluster peering connection](/hcp/docs/consul/extend/cluster-peering/establish) between two HCP Consul Dedicated clusters with separate HVNs peered to separate cloud environments. These clusters must also have a [compatible network topology](/hcp/docs/consul/concepts/network-topologies) in order to establish a connection between them.

Refer to [HashiCorp Virtual Network](/hcp/docs/hcp/network) for more information about using HVNs.

## Private cluster accessibility

When you create an HCP Consul Dedicated cluster, you have the option to choose between *private* and *public* accessibility.

Private clusters do not expose their endpoint to the public internet. Only connected networks can communicate with the cluster through HTTPS or gRPC. Private clusters are more secure than public clusters. We recommend using private clusters in production environments.

Public clusters have an HTTP endpoint that can be accessed by any connection outside your network. We recommend only using public clusters for development, testing, and debugging purposes.

## IP allowlist

HCP Consul Dedicated clusters can use an IP allowlist to restrict communication to a set of IPv4 address ranges. Addresses outside the ranges in the list are denied access to the cluster's network. This configuration provides an additional layer of security for Consul deployments with cluster peering connections. Refer to [secure cluster access with IP allowlist](/hcp/docs/consul/secure/ip-allowlist) for more information.

## Service intentions

Service intentions are a mechanism for securing L4 and L7 traffic in a service mesh with identity-based enforcement. When you create a service intention, Envoy proxies check incoming requests against a set of user-defined rules, then allow or deny access accordingly.
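
For example, a minimal service intentions configuration entry, using hypothetical service names, looks like the following sketch. You can apply it with the `consul config write` command:

```hcl
Kind = "service-intentions"
Name = "billing-api"
Sources = [
  {
    Name   = "frontend"
    Action = "allow"
  },
  {
    Name   = "*"
    Action = "deny"
  }
]
```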
HCP Consul Dedicated does not configure service intentions for clusters.

For more information, refer to [Service intentions overview](/consul/docs/connect/intentions) in the Consul documentation. For specifications and example configuration entries, refer to [Service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions).
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/secure/ip-allowlist.mdx b/content/hcp-docs/content/docs/consul/secure/ip-allowlist.mdx
new file mode 100644
index 0000000000..267bc351c4
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/secure/ip-allowlist.mdx
@@ -0,0 +1,33 @@
---
page_title: Secure cluster access with IP allowlist
description: |-
  IP allowlists limit access to an HCP Consul Dedicated cluster to a set of up to three IPv4 address ranges in CIDR notation. Learn how to use an IP allowlist to add additional security to cluster peering connections between HCP Consul Dedicated and self-managed Community and Enterprise clusters.
---

# Secure cluster access with IP allowlist

@include 'alerts/consul-dedicated-eol.mdx'

HCP Consul Dedicated clusters can use an IP allowlist to restrict communication to a set of IPv4 address ranges. Addresses outside the ranges in the list are denied access to the cluster's network. This configuration provides an additional layer of security for Consul deployments with cluster peering connections.

## Background

HCP Consul Dedicated clusters are hosted in an HCP Consul Dedicated environment, and they support services hosted in a user-managed environment. In this deployment model, a [HashiCorp Virtual Network (HVN) peering connection](/hcp/docs/hcp/network) ensures that internal communications between environments remain secure. However, self-managed Community and Enterprise clusters do not require HVN peerings, as all network components are hosted in a single user-managed environment.
When you use cluster peering connections between HCP Consul Dedicated and self-managed Community and Enterprise clusters, you can add security to cross-cluster communications by configuring HCP Consul Dedicated clusters to deny requests from IP addresses that are not part of your network.

You can enable and configure an IP allowlist when creating an HCP Consul Dedicated cluster. You can also enable it later, disable it, or change the range of allowed addresses by editing an existing cluster.

## Use IP allowlist

To add an IP address to an existing cluster's allowlist, complete the following steps:

1. From the Consul Overview, next to the cluster you want to secure access to, click **More** (three horizontal dots). Then, click **Edit cluster**.
1. Under "Cluster accessibility", turn on **Allow select IPs only**.
1. Enter the IP address range that is allowed to access the cluster. The address must be in CIDR notation.
1. Optionally, enter a description to help you identify the source.
1. Click **Apply changes** to save changes to the IP allowlist.

You can add IP addresses to the allowlist one at a time, or you can click **Add another IP address** to add up to three addresses.

HCP Consul Dedicated supports up to three IP address ranges on the allowlist at one time. Click the trash icon to delete an address and its description.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/upgrade/index.mdx b/content/hcp-docs/content/docs/consul/upgrade/index.mdx
new file mode 100644
index 0000000000..183e9b62d8
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/upgrade/index.mdx
@@ -0,0 +1,49 @@
---
page_title: Upgrade clusters
description: |-
  HCP provides a dedicated workflow to upgrade the version of HCP Consul Dedicated clusters. Learn how to upgrade your cluster's Consul version.
+--- + +# Upgrade clusters + +@include 'alerts/consul-dedicated-eol.mdx' + +HCP provides a dedicated workflow to upgrade the version of an HCP Consul Dedicated cluster. + +For more general information about the upgrade process, refer to [Upgrading Consul](/consul/docs/upgrading) in the Consul documentation. + +## Introduction + +As described in the Consul documentation, the [general upgrade process for Consul servers](/consul/docs/upgrading/instructions/general-process) consists of the following steps: + +1. Create a snapshot of your cluster. If an error occurs during the upgrade process, this snapshot can be used to recover the servers' previous working state. +1. Check the current state of the Raft quorum to find the cluster's leader. Upgrade the binary on the followers first, then upgrade the leader. + +You can use HCP to simplify the process of managing Consul versions for HCP Consul Dedicated clusters. When a new version of Consul in released, a badge appears in the Consul overview to inform you if your cluster is out of date or out of support. You can upgrade the cluster's version at a time of your choosing, according to your network's needs. + +You can upgrade as new versions of Consul become available or as your networking needs evolve. HashiCorp may upgrade the base host image or version used by your Consul clusters to fix some common vulnerabilities and exposures (CVE). + +## Cluster versions + +To safely and securely manage your Consul clusters, HCP Consul Dedicated follows HashiCorp’s [Support Period Policy](https://support.hashicorp.com/hc/en-us/articles/360021185113-Support-Period-and-End-of-Life-EOL-Policy). Be aware of the following aspects of the policy: + +- HCP Consul Dedicated offers `n-2` version support for bug fixes and CVEs via new minor releases, where `n` is the latest major release of Consul. You can identify a major release by a change in the first digit (`X`) or the second digit (`Y`) of the Consul version nomenclature (`X.Y.Z`). 
For example, if the latest release is `1.16.*`, fixes will be available for versions `1.16.*`, `1.15.*`, and `1.14.*`.
- HashiCorp updates HCP Consul Dedicated Consul clusters with CVE patches for clusters that fall within the `n-2` version.
- HashiCorp recommends users keep Consul clusters within two (2) major releases from the latest major release. Doing so ensures that bug fixes and security patches are successfully applied to HCP Consul Dedicated clusters.
- HashiCorp supports [Generally Available (GA) Consul releases](https://github.com/hashicorp/consul/releases) for up to two years from their release date. In some cases, HashiCorp may request that users upgrade to newer releases in order to resolve support requests.

## Upgrade an HCP Consul Dedicated cluster's Consul version

Before you upgrade a Consul cluster, we recommend [creating a snapshot of the cluster](/hcp/docs/consul/upgrade/snapshots). If an error occurs or your service network does not function as expected, use the snapshot to restore your cluster to its last working state.

1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com).
1. Select the organization or project where you created the cluster.
1. Click **Consul**.
1. From the Consul Overview, next to the cluster you want to update, click **More** (three horizontal dots) and then **Update version**.
1. Select a version from the dropdown. Then, click **Update now**.

The cluster's status changes to **Updating** as the process takes place.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/consul/upgrade/snapshots.mdx b/content/hcp-docs/content/docs/consul/upgrade/snapshots.mdx
new file mode 100644
index 0000000000..6df5cfd0ad
--- /dev/null
+++ b/content/hcp-docs/content/docs/consul/upgrade/snapshots.mdx
@@ -0,0 +1,66 @@
---
page_title: Restore clusters with snapshots
description: |-
  This topic describes how to create snapshots of a cluster, which you can use to restore a cluster to a previous state. Learn how to take a snapshot, restore a cluster, rename a snapshot, and delete a snapshot.
---

# Restore clusters with snapshots

@include 'alerts/consul-dedicated-eol.mdx'

This page describes the process to create and use snapshots, which are backup files for restoring Consul servers to a previous state.

You can [access an HCP Consul Dedicated cluster](/hcp/docs/consul/dedicated/access) to interact with the [`/snapshot` HTTP API endpoint](/consul/api-docs/snapshot) or the [`consul snapshot` CLI command](/consul/commands/snapshot). Snapshots of HCP Consul Dedicated clusters created using the API and CLI appear in HCP Consul alongside manually created snapshots.

## Create a snapshot on HCP Consul

To create a snapshot of a cluster, complete the following steps:

1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com).
1. Select the organization or project where you created the cluster.
1. Click **Consul**.
1. From the Consul Overview, click the cluster ID you want to create a snapshot for.
1. Click **Snapshots** and then **Create snapshot**.
1. Enter a name for the snapshot. Then, click **Create snapshot**.

The time it takes to create the snapshot depends on the size of the cluster. When the process is complete, the snapshot's status changes to **Ready**.

## Restore a cluster

To restore a cluster's Consul servers from a snapshot, complete the following steps:

1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com).
1. 
Select the organization or project where you created the cluster. +1. Click **Consul**. +1. From the Consul Overview, click the cluster ID you want to use to restore the cluster's state. +1. Click **Snapshots**. +1. Next to the snapshot you want to use to restore the cluster, click **More** (three horizontal dots) and then **Restore**. +1. Type **RESTORE** in the text box to confirm. Then, click **Restore snapshot**. + +By default, HCP creates a snapshot of the cluster before restoring it to a previous version. If you do not want a snapshot of the cluster in this state, uncheck the box before you confirm the restore. + +## Rename a snapshot + +You can edit the name of a snapshot that appears in the list of snapshots. However, renaming a snapshot does not change the underlying UUID that was assigned to the snapshot on its creation. + +To rename a snapshot, complete the following steps: + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you created the cluster. +1. Click **Consul**. +1. From the Consul Overview, click the cluster ID where you took the snapshot. +1. Click **Snapshots**. +1. Next to the snapshot you want to rename, click **More** (three horizontal dots) and then **Rename**. +1. Enter a new name for the snapshot. Then, click **Rename snapshot**. + +## Delete a snapshot + +To delete a snapshot, complete the following steps: + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization or project where you created the cluster. +1. Click **Consul**. +1. From the Consul Overview, click the cluster ID where you took the snapshot. +1. Click **Snapshots**. +1. Next to the snapshot you want to delete, click **More** (three horizontal dots) and then **Delete**. +1. Type **DELETE** in the text box to confirm. Then, click **Delete snapshot**. 
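
The `consul snapshot` CLI workflow mentioned at the top of this page can be sketched as follows. The address and token are placeholders for your cluster's values:

```shell-session
$ export CONSUL_HTTP_ADDR=https://consul-cluster-name.consul.alphanumeric-id.aws.hashicorp.cloud:443
$ export CONSUL_HTTP_TOKEN=YOUR_ADMIN_TOKEN
$ consul snapshot save backup.snap
$ consul snapshot inspect backup.snap
$ consul snapshot restore backup.snap
```

Snapshots created this way appear in HCP Consul alongside snapshots created through the portal.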
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/glossary.mdx b/content/hcp-docs/content/docs/glossary.mdx new file mode 100644 index 0000000000..db7a4dee2c --- /dev/null +++ b/content/hcp-docs/content/docs/glossary.mdx @@ -0,0 +1,207 @@ +--- +page_title: Glossary of Terms +sidebar_title: Glossary +description: |- + HashiCorp Cloud Platform Glossary. +--- + +# Glossary + +This page collects brief definitions of some of the technical terms used in the +documentation for HCP, HCP Consul, HCP Vault, and HCP Packer product families. + +- [Ancestors](#ancestors) +- [Ancestry](#ancestry) +- [Audit device log](#audit-device-log) +- [Base artifact](#base-artifact) +- [Bucket](#bucket) +- [Build](#build) +- [Channel](#channel) +- [Child](#child) +- [Descendants](#descendants) +- [Downstream build](#downstream-build) +- [Downstream artifact](#downstream-artifact) +- [Entity](#entity) +- [Golden image](#golden-image) +- [HCP Packer registry](#hcp-packer-registry) +- [HCP Packer registry data source](#hcp-packer-registry-data-source) +- [HCP Terraform provider](#hcp-terraform-provider) +- [HVN](#hvn) +- [Intra Region](#intra-region) +- [Inter Region](#inter-region) +- [Major Version](#major-version) +- [Minor Version](#minor-version) +- [Namespaces](#namespaces) +- [Organizations](#organizations) +- [Parent](#parent) +- [Seal](#seal) +- [Service API](#service-api) +- [Snapshots](#snapshots) +- [Tokenization service](#tokenization-service) +- [Tokens](#tokens) +- [Unseal](#unseal) +- [Version](#version) +- [Version fingerprint](#version-fingerprint) + +### Ancestors + +Upstream artifacts that an HCP Packer [bucket](#bucket) depends on directly or indirectly as source artifacts. + +### Ancestry + +In HCP Packer, ancestry refers to the relationship between source artifacts (parents) and their downstream child artifacts. The HCP Packer UI can display ancestry statuses that warn you when an artifact was built from an old version of one or more ancestors. 
Refer to the [Ancestry documentation](/hcp/docs/packer/manage/ancestry). + +### Audit Device Log + +Audit devices are the components in Vault that keep a detailed log of all requests and responses to Vault. Because every operation with Vault is an API request/response, the audit log contains every authenticated interaction with Vault, including errors. + +To learn more, go through the [Access the audit log for troubleshooting](/vault/tutorials/cloud/vault-ops#access-the-audit-log-for-troubleshooting) section of the Vault Operation Tasks tutorial. + +### Base Artifact + +Base artifact refers to the artifact that other artifacts are built upon. For example, security teams may publish a base artifact that other teams in the organization must use as a starting point for their projects. This can also be referred to as a source artifact or parent artifact. + +### Bucket + +A bucket is a container within the [HCP Packer registry](#hcp-packer-registry) that stores artifact metadata from a single Packer template. Buckets contain one or more [versions](#version). Reference the [bucket documentation](/hcp/docs/packer/store/create-bucket) for more details. + +### Build + +A build refers to the artifact metadata stored on the [HCP Packer registry](#hcp-packer-registry) from all artifacts produced by a single builder. Each artifact has a creation date and an ID that references the remote location of the artifact. Refer to the [metadata documentation](/hcp/docs/packer/store#builds) for more details. + +### Channel + +Channels assign HCP Packer registry [versions](#version) to human-readable names that consumers can reference in Packer templates and Terraform configurations. They allow consumers to automatically reference the correct artifact version on the registry without having to update their code. +Refer to the [channels documentation](/hcp/docs/packer/manage/channels) for more details.
+ +### Child + +In HCP Packer, child artifacts refer to downstream [descendants](#descendants) that Packer builds directly from one or more [parent artifacts](#parent). + +### Descendants + +Descendants are downstream artifacts that Packer built directly or indirectly from a common [ancestor](#ancestors). For example, this includes all artifacts Packer built from the ancestor’s direct [children](#child). + +### Downstream artifact + +Downstream artifact refers to an artifact that is built from a specific source artifact. For example, an artifact containing specific application software may be built on top of a security golden image. This is often also called a child artifact. + +### Downstream build + +Downstream build refers to an individual build that is based on artifacts from a specific, pre-existing build. + +### Entity + +Entity represents a Vault client which has one or more aliases mapped. For +example, a single user who has accounts in both GitHub and LDAP can be mapped +to a single entity in Vault that has two aliases, one of type GitHub and one of +type LDAP. + +To learn more about entities, go through the [Identity: Entities and +Groups](/vault/tutorials/auth-methods/identity) tutorial. + +### Golden image + +Golden image refers to a pre-configured image that should be used as the source for instance creation in infrastructure. + +### HCP Packer registry + +The HCP Packer registry is a service that stores metadata about your artifacts, including when they were created, where the artifacts exist in the cloud, and what (if any) git commit is associated with your image build. This bridges the gap between image factories and image deployments, allowing development and security teams to work together to create, manage, and consume golden images in a centralized way. Reference the [HCP Packer registry docs](/hcp/docs/packer) for more details. + +In the HCP Packer UI, the Registry is where you can view all of the [buckets](#bucket) in your organization.
+ +### HCP Packer registry data source + +The HCP Packer registry data source enables you to query the [HCP Packer registry](#hcp-packer-registry) for an artifact to use as the source for a Packer build. Data sources are only available in HCL templates. Refer to the [Metadata documentation](/hcp/docs/packer/store/reference) for more details. + +### HCP Terraform provider + +The HCP Terraform provider is the Terraform provider for HashiCorp Cloud Platform. Providers are plugins that allow Terraform to communicate with external APIs. The HCP Terraform provider includes the [`hcp_packer_version`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/data-sources/packer_version) and [`hcp_packer_artifact`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/data-sources/packer_artifact) data sources that you can use to query the [HCP Packer registry](#hcp-packer-registry) for an artifact to use in a Terraform configuration. Refer to the [reference metadata documentation](/hcp/docs/packer/store/reference) for more details. + +### HVN + +HashiCorp Virtual Network. An HVN delegates an IPv4 CIDR (classless inter-domain +routing) range to HCP, which is then reflected in the cloud provider's virtual +network CIDR range. + +### Intra Region + +The resources are all located within the same cloud provider region. + +### Inter Region + +The resources are located across different cloud provider regions. + +### Major Version + +Vault releases major functionality and features in their major version releases. +Examples of Vault major versions are 1.6, 1.7, etc. + +### Minor Version + +Minor version releases of Vault contain bug fixes and small enhancements that +do not have an impact on backward compatibility. Minor versions are released +more frequently than major releases and provide a safe upgrade path for users. +Examples of minor versions include 1.6.0, 1.6.1, 1.7.0, etc.
+ +### Namespaces + +Namespaces are a set of features within Vault Enterprise that allow Vault +environments to support secure multi-tenancy within a Vault deployment. + +To learn more, go through the following tutorials: + +- [Multi-tenancy with Namespaces](/vault/tutorials/cloud/vault-namespaces) +- [Secure Multi-Tenancy with Namespaces](/vault/tutorials/enterprise/namespaces) + +### Organizations + +An _organization_ is an entity in HCP that contains your resources, including [HashiCorp Virtual Networks (HVN)](/hcp/docs/hcp/network), registries, and server clusters. Organizations may also be referred to as _tenants_. + +### Parent + +In HCP Packer, parent artifacts refer to upstream [ancestors](#ancestors) that Packer uses as a direct source for one or more [child artifacts](#child). + +### Seal + +When a Vault server is started, it starts in a sealed state. In this state, Vault is configured to know where and how to access the physical storage, but doesn't know how to decrypt any of it. There is also an API to seal the Vault. This will throw away the master key in memory and require another unseal process to restore it. Sealing only requires a single operator with root privileges. + +To learn more, go through the [Seal the cluster](/vault/tutorials/cloud/vault-ops#seal-the-cluster) section of the Vault Operation Tasks tutorial. + +### Service API + +API server connected to the public internet. + +### Snapshots + +Vault enables users to take a snapshot of all Vault data. The snapshot can be used to restore Vault to the point in time when a snapshot was taken. + +To learn more about snapshots, go through the [Data snapshots](/vault/tutorials/cloud/vault-ops#data-snapshots) section of the Vault Operation Tasks tutorial. + +### Tokenization service + +Isolated encryption and decryption service. + +### Tokens + +Tokens are the core method for authenticating with Vault. Within Vault, tokens +map to information.
The most important information mapped to a token is the +policies. Vault policies control access to secrets. + +To learn more about Vault tokens, go through the [Vault +Tokens](/vault/tutorials/tokens) tutorials. + +### Unseal + +Unsealing is the process of obtaining the plaintext master key necessary to read the decryption key to decrypt the data, allowing access to the Vault. Prior to unsealing, almost no operations are possible with Vault. + +To learn more, go through the [Unseal the cluster](/vault/tutorials/cloud/vault-ops#unseal-the-cluster) section of the Vault Operation Tasks tutorial. + +### Version + +A version is an immutable record of each successful `packer build` for a single template, stored on the [HCP Packer registry](#hcp-packer-registry). Each version may contain multiple [builds](#build), depending on how you configured sources in your template. Refer to the [Metadata documentation](/hcp/docs/packer/store) for more details. + +### Version Fingerprint + +A version fingerprint is a unique identifier for each [version](#version) stored on the [HCP Packer registry](#hcp-packer-registry). Refer to the [template configuration documentation](/hcp/docs/packer/store/push-metadata) for more details. diff --git a/content/hcp-docs/content/docs/hcp/admin/billing/flex-multiyear.mdx b/content/hcp-docs/content/docs/hcp/admin/billing/flex-multiyear.mdx new file mode 100644 index 0000000000..1c517d8015 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/admin/billing/flex-multiyear.mdx @@ -0,0 +1,103 @@ +--- +page_title: Flex Multiyear +description: |- + This topic provides an overview of Flex Multiyear contracts, including setup and activation. +--- + +# Flex Multiyear + +This topic provides an overview of Flex Multiyear contracts, including setup and activation. + +## What is Flex Multiyear? + +Flex Multiyear is a contract-based billing model that provides access to HCP services at a discount on list prices. 
+Available services include HCP Terraform, HCP Packer, HCP Vault Dedicated, HCP Vault Secrets, and HCP Boundary. + +With Flex Multiyear, you commit a specific amount of spend upfront that you have up to three years to use. +The balance from your committed spend is drawn down over time based on your consumption of HCP services. + +Flex Multiyear has a minimum contract size. For additional information, [contact sales](https://www.hashicorp.com/contact-sales). + +## Set up a Flex Multiyear contract + +During the sales process, you should provide your HashiCorp Account Manager with: +1. A technical contact who has the privileges necessary to complete the [self-activation process](#activate-your-contract) +1. If using HCP Terraform: the names of existing HCP Terraform organizations you intend to use with HCP on Flex Multiyear + +After completion of the sales process, the technical contact must complete the [self-activation process](#activate-your-contract) +to transition your intended HCP organization to the Flex Multiyear billing model. + +Consider [configuring a backup credit card](#configure-a-post-contract-payment-method) +to avoid disruption of service if your Flex Multiyear balance is depleted or your contract expires +without another Flex Multiyear contract in place. + +### HCP Terraform and Flex Multiyear + + + +If you have HCP Terraform Organization(s) that have been operating on a contract type other than Flex, +please follow the +[HCP Terraform Organization Flex activation documentation](/terraform/cloud-docs/overview/activate-flex) +instead so that each HCP Terraform Organization is linked to your HCP Organization on Flex. +That Terraform-specific Flex activation documentation includes a step that directs +you to follow the general Flex activation steps below. + + + +To use your existing HCP Terraform organizations with HCP on Flex Multiyear, +you should provide your HCP Terraform organization name(s) to your account manager during the sales process.
+ +If you cannot edit your HCP Terraform Organization's plan as described in the +[Terraform-specific Flex activation documentation](/terraform/cloud-docs/overview/activate-flex#step-5-edit-the-hcp-terraform-organization-plan), +[contact support](https://support.hashicorp.com/hc/en-us/requests/new) with the +HCP Terraform Organization's details so they can enable you to edit its plan. + +### Activate your contract + +For an HCP organization not already on Flex, you must self-activate your Flex Multiyear contract +to transition your HCP organization to Flex. + +The self-activation process begins after the Flex Multiyear contract is signed. +HashiCorp will automatically generate a contract activation code that references your contract details +and email it to the technical contact specified to your HashiCorp Sales Account Manager during the sales process. +The technical contact must be a user within the applicable HCP organization with an +[organizational admin or billing admin role](/hcp/docs/hcp/iam/access-management#organization). + +Please follow the steps below to complete the activation process: +1. **Receive the activation email:** You (the designated technical contact) will receive an email titled + "Action required: Activate your contract". The email contains information about the Flex Multiyear contract, + the contract activation code, and an "Activate contract" link that will take you to the HCP portal when clicked. +1. **Select the applicable HCP organization:** Click the email's "Activate contract" link to open the HCP portal. + You will be presented with a list of eligible HCP organizations. Select the HCP organization you intend to apply + the Flex Multiyear contract to. +1. **Enter the contract activation code:** On the "add activation code" page, enter the activation code found in the activation email. +1.
**Review and confirm activation:** On the "confirm activation" page, double-check that the named HCP organization is + the correct organization to apply the Flex Multiyear contract to. You **cannot undo** or reassign an activated Flex Multiyear contract. + Once you are sure you have selected the correct HCP organization, click "Activate contract". +1. **Wait one hour:** After you click "Activate contract", you will be redirected back to the Account Summary page with a confirmation message. + You will also receive an email confirmation regarding the activation status. + Your HCP organization will transition to the Flex Multiyear billing model within the next hour. + At that time, your Flex Multiyear balance will be available to view on the HCP organization's dashboard. +1. **Complete HCP Terraform Flex activation if needed:** If you have existing HCP + Terraform Organizations and have not already associated them with your + HCP Organization on Flex, please follow the + [HCP Terraform Organization Flex activation documentation](/terraform/cloud-docs/overview/activate-flex). + +There is no self-activation process for Flex Multiyear recommit contracts. +The self-activation process is only needed for the initial transition of your HCP Organization +to the Flex Multiyear billing model. + +### Configure a post-contract payment method + +If your Flex Multiyear balance is depleted or contract expires without another Flex Multiyear contract in place, +your HCP organization will transition to [Pay-as-you-Go](/hcp/docs/hcp/admin/billing/pay-as-you-go) (PAYG). +If there is no credit card configured when a PAYG payment is due, your account will be considered delinquent. +If your account remains delinquent, we may suspend or terminate your resources consistent with the +[EULA for HashiCorp Cloud Software](https://eula.hashicorp.com/OnlineAgreements.pdf). 
+ +Therefore, consider configuring a backup credit card now in case your HCP organization transitions to PAYG +at the end of a Flex Multiyear contract. To learn more about configuring a backup credit card, +refer to the [PAYG manage payment method](/hcp/docs/hcp/admin/billing/pay-as-you-go#manage-payment-method) documentation. + +To avoid a transition to PAYG entirely, [contact sales](https://www.hashicorp.com/contact-sales) +before your Flex Multiyear balance is depleted or your contract expires. diff --git a/content/hcp-docs/content/docs/hcp/admin/billing/index.mdx b/content/hcp-docs/content/docs/hcp/admin/billing/index.mdx new file mode 100644 index 0000000000..5bb94fbf5a --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/admin/billing/index.mdx @@ -0,0 +1,94 @@ +--- +page_title: Billing Overview +description: |- + This topic provides an overview of HCP payment options, trial credits, and other billing-related information. +--- + +# Billing Overview + +This topic provides an overview of HCP payment options, trial credits, and other billing-related information. + +## Billing Models + +HCP organizations are billed based on the [usage](#usage) of generally-available HCP services. + +An HCP organization begins in the [trial](#trial) state, with usage limited by its available trial credits. +You should configure a payment method (credit card) to ensure continuity of service when trial credits are depleted. + +For ongoing use, you may either: +- Use a [Pay-as-you-Go](#pay-as-you-go) plan by configuring a payment method (credit card) +- For larger customers, contact HashiCorp sales to enter a [Flex Multiyear contract](#flex-multiyear-contract) + +To view your current billing information, log into [the HCP Portal](https://portal.cloud.hashicorp.com/) +and navigate to **Billing** via the billing summary tile on your Project or Organization dashboard. + +### Trial + +The trial billing model is the default state when you first create your HCP organization.
+The HCP organization is automatically granted $500 in trial credits to apply towards HCP services, +enabling you to try before you buy. Trial credits expire after six months. + +We recommend adding a credit card during the setup process to ensure continuity of service when credits are depleted. +Trial credits will always be deducted before accruing charges against the credit card, except for HCP organizations on a contract. +If trial credits are depleted without a payment method on file, your HCP services are terminated, +and you are prevented from using services until you add a payment method. + +You can view your remaining trial credits or configure a credit card for your HCP Organization at any time +from the **Billing** tile in your organization's dashboard in the [HCP Portal](https://portal.cloud.hashicorp.com/), +as shown in the following image. +![Org Details Page With Trial Billing Status](/img/docs/trial-status-org-details.png) + +Until a payment method is provided, your HCP organization will remain in **Trial status**. +This status is visible from your HCP organization's billing account summary page. +While in trial status, your HCP organization is limited to a single Vault Dedicated or Consul Dedicated cluster. +![Billing Account Summary Page With Trial Billing Status](/img/docs/trial-status-account-summary.png) + +HCP services may have limitations while operating in trial status. For example: +- HCP Vault Dedicated and Consul Dedicated allow only a single cluster to be created. +- For other HCP services, you are limited to a maximum of three resources of that service type. + A resource is a deployment of an HCP service, such as an HCP Boundary cluster or HCP Packer registry. + +### Pay-as-you-Go + +Pay-as-you-Go (PAYG) is a no commitment billing model that provides access to HCP services at list prices. + +You transition from trial to PAYG by [adding a credit card as a payment method](/hcp/docs/hcp/admin/billing/pay-as-you-go#add-a-credit-card).
+Your organization's accrued usage for the month will be charged on the first day of the following calendar month. +HCP sends an invoice copy to the email address specified when setting up the credit card. +You can change the email address by editing the credit card details in the 'Billing' tab. +Invoices and receipts are also available to download from the billing section of the HCP portal. + +If your organization has remaining trial credits, those will be drawn down before charges accrue to your credit card. + +If the credit card on file is expired or unable to process payment, the following will occur: +- HCP will freeze your ability to deploy additional resources. +- We will retry payment several times over the next few weeks and notify you at the specified billing email address. +- If your account remains delinquent, we may suspend or terminate your resources consistent with the + [EULA for HashiCorp Cloud Software](https://eula.hashicorp.com/OnlineAgreements.pdf). + +### Flex Multiyear Contract + +Flex Multiyear is a contract-based billing model that provides access to HCP services at a discount on list prices. + +With Flex Multiyear, you commit a specific amount of spend upfront that you have up to three years to use. +The balance from your committed spend is drawn down over time based on your consumption of HCP services. +If the balance is depleted or the contract expires without another Flex Multiyear contract in place, +your HCP organization will transition to [Pay-as-you-Go](#pay-as-you-go) and require configuring a credit card for payment. + +For additional information, refer to the [Flex Multiyear documentation](/hcp/docs/hcp/admin/billing/flex-multiyear) +or [contact sales](https://www.hashicorp.com/contact-sales). + +## Usage + +HCP services are deployed as resources that are billed based on usage, as elaborated in the +[Pricing Definitions documentation](/hcp/docs/hcp/admin/billing/pricing-definitions).
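As a rough illustration of how usage-based charges accrue, the sketch below multiplies each resource's hours of runtime by an hourly rate and sums the result. The resource names and rates are hypothetical, not actual HCP prices.

```python
# Hypothetical usage records: (resource, hours_ran, hourly_rate_usd).
# Names and rates are illustrative only, not actual HCP prices.
usage = [
    ("vault-cluster-small", 720, 1.50),
    ("packer-registry", 720, 0.02),
]

def monthly_charge(records):
    """Sum each resource's runtime hours multiplied by its hourly rate."""
    return round(sum(hours * rate for _, hours, rate in records), 2)

print(monthly_charge(usage))  # 1094.4
```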
+ +For the trial and PAYG billing models, the rate for usage of each service is shown in the HCP Portal +on the organization's billing > pricing page. +For the Flex Multiyear billing model, the rates are contained in your contract. +Displayed prices are exclusive of taxes unless otherwise stated. + +The HCP billing account summary page shows your organization's running usage for the month +and a summary of charges accrued thus far by [project](/hcp/docs/hcp/admin/projects) and resource. +The following image shows an example of usage for an organization with a Flex billing model reflected in the HCP billing account summary screen. +![Flex Billing Account Summary Page](/img/docs/billing-flex-account-summary.png) diff --git a/content/hcp-docs/content/docs/hcp/admin/billing/pay-as-you-go.mdx b/content/hcp-docs/content/docs/hcp/admin/billing/pay-as-you-go.mdx new file mode 100644 index 0000000000..a1c6fc9d86 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/admin/billing/pay-as-you-go.mdx @@ -0,0 +1,82 @@ +--- +page_title: Pay-as-you-Go +description: |- + This topic provides an overview of the pay-as-you-go (PAYG) billing model, including how to add, change, and remove a credit card payment method. +--- + +# Pay-as-you-Go (PAYG) + +This topic provides an overview of the pay-as-you-go (PAYG) billing model, +including how to add, change, and remove a credit card payment method. + +## PAYG Overview + +Pay-as-you-Go (PAYG) is a no commitment billing model that provides access to HCP services at list prices. + +To move from the [trial billing model](/hcp/docs/hcp/admin/billing#trial) to PAYG, [add a credit card as a payment method](/hcp/docs/hcp/admin/billing/pay-as-you-go#add-a-credit-card). + +Your organization's accrued usage for the month will be charged on the first day of the following calendar month. +HCP sends an invoice copy to the email address specified when setting up the credit card. 
+You can change the email address by editing the credit card details in the 'Billing' tab. +Invoices and receipts are also available to download from the billing section of the HCP portal. + +If your organization has remaining [trial](/hcp/docs/hcp/admin/billing#trial) credits, +those will be drawn down before charges accrue to your credit card. + +If the credit card on file is expired or unable to process payment, the following will occur: +- HCP will freeze your ability to deploy additional resources. +- We will retry payment several times over the next few weeks and notify you at the specified billing email address. +- If your account remains delinquent, we may suspend or terminate your resources consistent with the + [EULA for HashiCorp Cloud Software](https://eula.hashicorp.com/OnlineAgreements.pdf). + +## Manage payment method + +PAYG requires a credit card as a payment method. A credit card can be added, changed, or removed from the HCP portal. +HCP organizations on a Flex Multiyear contract can also configure a credit card as a backup in case they transition to PAYG in the future. + +### Add a credit card + +1. Open your Project or Organization dashboard and click on the **View billing** link in the Billing summary tile. +1. Click **Payment methods** in the sidebar. +1. Click **Add credit card** and enter your billing information when prompted. + + The following image shows the interface for adding a credit card in the HCP organization screen. + ![Add a credit card in the billing screen from your organization's dashboard](/img/docs/billing-add-cc-org-dashboard.png 'Billing Tab') + The following image shows the interface for adding a credit card in the account summary screen. + ![Add a credit card in the billing screen from your account summary dashboard](/img/docs/billing-add-cc-account-summary.png 'Billing Tab') + +### Change credit card + +1. 
Navigate to the Billing page for your organization via the Billing summary tile in the Project or Organization dashboard. +1. Click **Payment methods** in the sidebar. +1. Select **Edit credit card** from the **Manage** menu. +1. Update the billing information when prompted. + +### Remove credit card + +1. Delete any resources currently in use, such as Consul or Vault clusters and Packer registries. +1. Navigate to the Billing page for your organization via the Billing summary tile in the Project or Organization dashboard. +1. Click **Payment methods** in the sidebar. +1. Select **Remove credit card** from the **Manage** menu. + + The following image shows the interface for removing a credit card in the HCP payment methods screen. + ![Billing Payment Methods Page Remove Credit Card](/img/docs/billing-remove-cc-dropdown.png) + +HCP generates a final invoice for the remainder of your usage. If your organization does not have any remaining credits or another payment method, you will not be able to deploy any new paid resources. + +## Understanding your payment status + +To find the payment status for each monthly PAYG plan statement, navigate to your organization's "Monthly summaries" page. + +1. From the "Organization overview," click **Billing**. +1. Click **Monthly summaries**. + +The following list describes each payment status that may appear when reviewing your pay-as-you-go monthly summaries: + +- **Payment Due**: This status indicates that there is an outstanding amount that needs to be paid by the end of the billing cycle. Your card on file will be automatically charged at the end of the month. +- **Good Standing**: This status indicates that the statement has been paid in full and there are no outstanding charges. +- **Overdue**: This status indicates that the payment for your statement was not received by the due date. Ensure you address overdue payments to avoid any interruptions to your services.
+- **Void**: This status indicates that the transaction was canceled or invalidated. No action is required from your end. +- **Payment Pending**: This status indicates that a payment is in progress. After the payment is processed successfully, your status will be updated to `Good Standing`. + +If you have any questions or concerns about your billing status, [contact support](https://support.hashicorp.com/hc/en-us/requests/new). diff --git a/content/hcp-docs/content/docs/hcp/admin/billing/pricing-definitions.mdx b/content/hcp-docs/content/docs/hcp/admin/billing/pricing-definitions.mdx new file mode 100644 index 0000000000..92ec4b4f8d --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/admin/billing/pricing-definitions.mdx @@ -0,0 +1,206 @@ +--- +page_title: Pricing Definitions +description: |- + This topic provides definitions for HCP usage-based pricing that applies to the trial, PAYG, and Flex Multiyear billing models. +--- + +# Pricing Definitions + +This topic provides definitions for HCP usage-based pricing that applies to the trial, PAYG, and Flex Multiyear billing models. + +## Usage-Based Pricing + +Usage-based pricing applies to the trial, PAYG, and Flex Multiyear billing models. + +Under your HCP organization's billing model, every available HCP service has one or more billing metrics. +These billing metrics are metered by incremental count or by time, +with usage and costs accrued from the start of the month updated hourly on the billing page with up to a 30-minute delay. +For example, by 10:30 am, the usage summary and balance will reflect charges for the 9 am to 10 am hour. + +The usage summary values are estimates until the statement is finalized at the end of the month. +This allows HCP to backfill any missing data.
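The hourly update cadence described above can be made concrete with a small sketch. It assumes the simplified rule that the usage summary reflects every full billing hour whose end is at least 30 minutes in the past; the function name is invented for illustration.

```python
from datetime import datetime, timedelta

def latest_reflected_hour_end(now):
    """Return the end of the most recent billing hour guaranteed to appear
    in the usage summary, assuming hourly updates with up to a 30-minute delay."""
    return (now - timedelta(minutes=30)).replace(minute=0, second=0, microsecond=0)

# By 10:30 am, charges through the 9 am to 10 am hour are reflected.
print(latest_reflected_hour_end(datetime(2024, 3, 1, 10, 30)))  # 2024-03-01 10:00:00
```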
+ +## Billable Metrics + +Each HCP service has: + +- One or more billable metrics +- One or more user-selected service attributes that affect the unit price per billable metric, such as edition (Essentials, Standard, Premium) or cluster size + +Each billable metric has: + +- A name and definition +- A unit price per rating period + +The unit price is exclusive of taxes unless otherwise specified. +The rating period for the unit price is either: + +- **Hourly:** Each unit during the rated hour accrues the stated hourly cost. +- **Monthly:** Each incremental unit during the rated hour accrues the stated monthly cost. The unit count resets to zero at the start of each month. + +The sections below provide the billable metric definitions for HCP services. + +### HCP Terraform + +#### Definitions + +- **HCP Terraform Organization (HCP Terraform Org):** A unique, top-level entity of HCP Terraform that contains resources, users, and administrative policies. + It serves as an access control boundary that isolates resources for security. + One or multiple HCP Terraform Organization(s) can be associated with a single HCP Org, and the associated HCP Org serves as a billing boundary for costs. +- **HCP organization (HCP Org):** A unique, top-level entity that contains resources, users, and administrative policies + that serves as an access control boundary that isolates resources for security and a billing boundary for costs. +- **Resource:** A resource block in an HCP Terraform configuration that declares one or more infrastructure objects. + Each infrastructure object, including but not limited to “Local Only” Resources, counts as a single Resource. +- **Resource Under Management (RUM)** or **Managed Resource:** A Resource in an HCP Terraform managed state file + starting from the first time a Terraform plan or Terraform apply run is performed on the Resource, and/or the resource is provisioned, + and where mode = `managed` in the state file.
"Null Resources" are excluded from Managed Resources. +- **Null Resources:** A specific type of Resource, defined as a `null_resource` or `terraform_data` resource in an HCP Terraform managed state file, + that is part of the standard resources under management lifecycle but intentionally takes no action. +- **Hourly Peak Managed Resources:** The sum of the maximum number of concurrent "Managed Resources" of each HCP Terraform Org that exist in an hourly billing period. + +#### Pricing + +Usage-based billing charges for HCP Terraform usage of Resources Under Management (RUM) based on the total number of **Hourly Peak Managed Resources** +aggregated across all HCP Terraform Orgs linked to a single HCP Org, within a billing hour. +The rate of each RUM is determined by the product edition. + +### HCP Packer + +#### Definitions + +- **HCP Packer Image Bucket:** A unique HCP Packer configuration file that is being tracked by the HCP Packer Registry. + The tracked configuration file can define one or multiple versions of an image. +- **HCP Packer Registry:** Tracks each artifact's metadata from configuration files. + Such metadata is available in the user interface and via API, which makes the metadata accessible outside of the HCP Packer Registry. + HCP Packer can track data from images across clouds but does not host any image templates or image artifacts themselves. +- **HCP Packer User:** Any individual with access to a customer’s HCP Packer Registry. +- **HCP Packer Request:** Any time a Customer queries the HCP Packer get channel API to fetch the assigned iteration, + including but not limited to using the Terraform data source. + - All other requests, including requests generated by the HCP Packer user interface, shall not apply for billing purposes.
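To make the HCP Packer Request definition above concrete, the sketch below counts which entries of a hypothetical request log would be billable: only get channel API queries, excluding requests generated by the HCP Packer user interface. The log format and field names are invented for illustration.

```python
# Hypothetical request log: (api_endpoint, originator) pairs.
request_log = [
    ("get-channel", "terraform-data-source"),  # billable
    ("get-channel", "cli"),                    # billable
    ("get-channel", "ui"),                     # UI-generated: not billable
    ("list-buckets", "cli"),                   # not the get channel API: not billable
]

def billable_request_count(log):
    """Count get channel queries that did not originate from the HCP Packer UI."""
    return sum(
        1 for endpoint, origin in log
        if endpoint == "get-channel" and origin != "ui"
    )

print(billable_request_count(request_log))  # 2
```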

#### Pricing

Usage-based billing charges for HCP Packer usage of:

- **HCP Packer Image Buckets:** An HCP Packer Image Bucket is charged on a per-hour basis, rounded up to the nearest hour,
  from the time its metadata is stored in the registry until the metadata is deleted.
  The hourly rate of each HCP Packer Image Bucket is determined by the product edition (i.e., Essentials, Standard)
  and the total number of HCP Packer Image Buckets within a billing hour.
- **HCP Packer Image Requests:** An HCP Packer Request is charged per HCP Packer Request made.
  The rate of each HCP Packer Request is determined by the product edition.

### HCP Vault Dedicated

#### Flex Multiyear Exclusions

HCP Vault Dedicated's Starter edition is not available on Flex Multiyear.

#### Definitions

- **HCP Vault Dedicated Cluster(s):** A grouping of fully managed virtual machines used to run HCP Vault Dedicated.
  - HCP Vault Dedicated Clusters are charged on a per-hour basis, from the time a cluster is created until it is deleted.
    Partial hours are charged by the minute.
    The hourly rate is based on the product edition (i.e., Starter, Development, Essentials, Standard) and the "Cluster Size" (i.e., Small, Medium, Large).
- **HCP Vault Dedicated Client(s):** Unique applications, services, and/or users that consume HCP Vault Dedicated.
  - For the purposes of billing and consumption, only unique and active HCP Vault Dedicated Clients during each monthly billing period are counted towards totals.
    Within a monthly billing period, each HCP Vault Dedicated Client is counted once, no matter how many times it has been active.
    Once an HCP Vault Dedicated Client has authenticated to a cluster, that Client has unlimited access to that cluster for the remainder of the month.
    A Secret with a configured Secret Sync Destination is counted as a unique and active Client.
    A Secret Sync Destination represents the external secrets management location where a Secret may be synchronized.
  - For the purposes of billing and consumption, a single HCP Vault Dedicated Client which authenticates to multiple HCP Vault Dedicated Clusters
    (excluding HCP Vault Dedicated Development and Starter edition Clusters) will be counted as multiple Clients.

#### Pricing

Usage-based billing charges for HCP Vault Dedicated usage of:

- **HCP Vault Dedicated Clusters:** Charged on a per-hour basis, from the time a cluster is created until it is deleted.
  Partial hours are charged by the minute.
  The HCP Vault Dedicated Cluster hourly rate is based on the product edition and the cluster size (i.e., Extra Small, Small, Medium, Large).
- **HCP Vault Dedicated Clients:** Charged when a unique HCP Vault Dedicated Client becomes active during the billing month.
  HCP Vault Dedicated Development edition allows a maximum of 25 HCP Vault Dedicated Clients per month and is eligible for Silver Support but excluded from Sev-1.

### HCP Vault Secrets

#### Definitions

- **Secrets:** Sensitive data and information for which access is controlled and managed by Customer's deployment of HashiCorp Products.
  Secrets may include, but are not limited to: tokens, API keys, passwords, encryption keys, or any type of credential.
  Secrets are unique name and value pairs enabling cloud native applications to connect with databases, SaaS services, and other third-party systems.
- **API Call(s):** An action aligned to APIs that enable fetching static, auto-rotated, or dynamic secret values (individual or bulk)
  through the various interfaces such as the User Interface (UI), Command-Line Interface (CLI), or directly with the API itself.

#### Pricing

Usage-based billing charges for HCP Vault Secrets usage of:

- **Secrets:** Each Secret is charged on a per-hour basis, from the time a Secret is created until it is destroyed.
  The hourly rate is based on the total number of Secrets within each edition, within a single HCP org, within a billing hour.
  Each partial hour is billed as a full hour.
- **API Call(s):** An HCP Vault Secrets API Call is charged per API Call made.
  The rate of each API Call is determined by the product edition.

### HCP Boundary

#### Definitions

- **HCP Boundary User:** Any human or machine identity taking an authenticated action to HCP Boundary.
  Service or non-human accounts cannot be used to artificially reduce HCP Boundary Users.

#### Pricing

Usage-based billing charges for HCP Boundary usage of HCP Boundary Users.
An HCP Boundary User is counted once for each month in which the User takes an authenticated action to HCP Boundary.
The monthly rate of each HCP Boundary User is determined by the product edition.

### HCP Vault Radar

#### Definitions

- **Active Contributors:** An HCP Vault Radar Monthly Active Contributor is a human or machine identity that contributed to a scanned Data Source within a measured month. HCP Vault Radar Monthly Active Contributors are measured by counting the user contributions per Data Source. Service or non-human accounts cannot be used to artificially reduce HCP Vault Radar Monthly Active Contributors.
- **Data Source:** The application, database, file, storage location, or resources at a Host or Server level that will be scanned for secrets discovery by any variant of HCP Vault Radar.
- **Resource Block:** Up to 10,000 repositories, databases, or component data locations that make up a data source.

#### Pricing

Usage-based billing charges for HCP Vault Radar usage of Active Contributors. Available only on Flex Multiyear.

### HCP Consul Dedicated

@include 'alerts/consul-dedicated-eol.mdx'

#### Flex Multiyear Exclusions

HCP Consul Dedicated is not available on Flex Multiyear.

#### Definitions

- **Consul Agent:** The long-running daemon on every member of the Consul Cluster,
  which can run in either client or server mode.
- **Consul Client:** An agent that typically interfaces with the Consul Server and requires minimal infrastructure resources.
  Clients are hosted in the Customer's environment.
- **Consul Cluster:** Multiple Consul Servers that form a control group that has a
  single Consul Server Leader as well as a pool of Consul Clients that host and run a user's workloads.
- **Consul Server Leader:** A Consul Server within a Consul Cluster which is
  responsible for ingesting new log entries, replicating to other Consul Servers in the same Consul Cluster,
  and managing when an entry is considered committed.
  Only one Consul Server can act as the Consul Server Leader in a single Consul Cluster.
- **Consul Server:** A Consul Agent with an expanded set of technical responsibilities such as
  participating in a Raft quorum, maintaining cluster state, responding to RPC queries,
  exchanging WAN gossip with other Consul Clusters, and forwarding queries to the Consul Server Leader or remote Consul Clusters.
  Consul Servers are hosted in HashiCorp's environment.
- **Consul Service:** A logical representation of an application or microservice that is registered in Consul.
- **Consul Service Instances:** One or more running versions of a given Consul Service,
  each tracked as a distinct provider of the Consul Service by Consul's service registry.

#### Pricing

Usage-based billing charges for HCP Consul Dedicated usage of:

- **Consul Clusters:** Charged on a per-hour basis, from the time a cluster is created until it is deleted.
  Partial hours are charged by the minute. The hourly rate is based on the product edition (i.e., Development, Essentials, Standard & Premium),
  "Cluster Size" (i.e., Small, Medium, Large), cloud provider, and cloud region.
- **Consul Service Instances:** Charged by the minute.
  The per-minute rate is based on the total number of "Service Instances" aggregated across all Consul Clusters, within an HCP Org, within a billing minute.
  For the purposes of billing and consumption, Service Instances that are registered with HCP Consul Dedicated Development edition Clusters
  are excluded from the aggregation.
diff --git a/content/hcp-docs/content/docs/hcp/admin/orgs.mdx b/content/hcp-docs/content/docs/hcp/admin/orgs.mdx
new file mode 100644
index 0000000000..4472e95307
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/admin/orgs.mdx
---
page_title: Organizations
description: |-
  Create and manage an organization in HashiCorp Cloud Platform (HCP).
---

# Organizations

This page describes how to create and manage an organization in HashiCorp Cloud Platform (HCP).

## Introduction

An _organization_ is a top-level entity in HCP for organizing resources. It contains one or more
[HCP projects](/hcp/docs/hcp/admin/projects), which separate access to resources such as [HashiCorp Virtual Networks (HVN)](/hcp/docs/hcp/network) according to [user permissions](/hcp/docs/iam/users#user-permissions).

Users can be members of multiple organizations if invited by the admins of other organizations. However, you can only create and own one organization for your HCP account.

An organization can have up to 100 projects.

## Create an organization

When you sign up for a HashiCorp Cloud Platform (HCP) account, [the HCP Portal](https://portal.cloud.hashicorp.com/) takes you to a guided workflow.

1. Select the type of organization you want to create, either **Business** or **Personal**.
1. Specify the name, address, and country of origin for your organization. The name must be unique. If another organization is already using the name, you will receive a prompt to choose a different one.
1. Accept the terms and conditions. Then click **Create organization**.
+ +After you create your organization, you can [invite users to your organization](/hcp/docs/hcp/admin/users#invite-users) and start creating HCP resources. + +## Locate the organization ID + +To locate the organization ID: + +1. At the bottom left, click the name of the current organization to open the organization and project selector. +1. Select an organization to open the organization's dashboard. +1. From the organization's dashboard, click **Organization settings**. +1. To copy the **Organization ID**, click the clipboard icon next to the ID. + +## Manage an organization + +To change your organization's name: + +1. Sign in to [the HCP Portal](https://portal.cloud.hashicorp.com/). +1. From the organization's dashboard, click **Organization settings**. +1. At the top-right, click **Manage**, and then click **Rename organization**. +1. Enter a new organization name. The name must contain between 3 and 40 characters, and it may include ASCII letters, numbers, hyphens, and underscores. The name must be unique. If another organization is already using the name, you will receive a prompt to choose a different one. +1. Click **Save**. + +You may encounter an `Organization name update failed` error when managing an HCP organization from an HCP Terraform workspace. Refresh the organization's settings page from HCP Terraform, and the name change should take effect. \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/admin/projects/index.mdx b/content/hcp-docs/content/docs/hcp/admin/projects/index.mdx new file mode 100644 index 0000000000..2b42ba4569 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/admin/projects/index.mdx @@ -0,0 +1,100 @@ +--- +page_title: Projects +description: |- + Create and manage projects under each organization in HashiCorp Cloud Platform (HCP). +--- + +# Projects + +Projects are lightweight containers for resources or use cases that require similar access. An organization contains one or more projects. 
HCP resources such as [HashiCorp Virtual Networks
(HVN)](/hcp/docs/hcp/network) and server clusters reside within projects.

Use projects to segment access within an organization. For example, projects can separate teams, use cases, or environments, such as development, staging, and production. The billing summary reports usage per project.

Here are important characteristics about HCP projects:

- _Global_ [HCP service quotas](/hcp/docs/hcp/admin/support#service-quotas) remain at the
  organization level and they are not enforced per project.

- An [organization](/hcp/docs/hcp/admin/orgs) can contain one or more projects.

  

  Refer to the [HCP
  Support](/hcp/docs/hcp/admin/support) page to learn more about the service
  quotas.

  

- HCP resource names (e.g., cluster names) are unique per project and not per
  organization.

- You cannot deploy an HCP Vault Dedicated or HCP Consul Dedicated cluster if an
  HVN belongs to a different project.

- To delete a project, all resources under the project must be deleted or
  deactivated first. See the [manage resources](#manage-resources) section.

### Use cases

Segregating access within your organization with projects is the best way to enforce least-privileged access. Deploying all HCP services or resources within one project can lead to several unintended consequences:

- Increased likelihood of over-privileging identities within the project.
- Project billing invoices may become less useful due to the large number of resource types and use cases represented within the project.
- Self-service use cases become harder to support over time because of the challenges of isolating access and control among many disparate identities in one project.

## Create a project

Users with organization contributor, admin, or owner roles can create new
projects. If an organization contributor creates a new project, the user
automatically becomes the admin of that project.
(Refer to the [User
Permissions](/hcp/docs/hcp/admin/users#user-permissions) for information about
the roles you can assign.)

1. Log in to the [HCP Portal](https://portal.cloud.hashicorp.com/) and choose your
   organization.

   

   If you have logged in before, the portal opens the last project you were in.
   Navigate back to the organization level from the breadcrumbs, or click on the
   HashiCorp icon at the top-left to choose your organization.

   

1. Select **Projects** in the sidebar.

1. Click **+ Create project**.

1. Enter the **Project name** and **Project description**.

1. Click **Create project** to complete the process.


## Manage projects

Users with the project admin role can edit the existing project name and
description, or delete the project. (Refer to the [User
Permissions](/hcp/docs/hcp/admin/users#user-permissions) for information about
the roles you can assign.)

1. Log in to the [HCP Portal](https://portal.cloud.hashicorp.com/) and choose your
   organization.

1. Select **Projects** in the sidebar.

1. Expand the menu next to the project you wish to modify, and select **Edit
   project** to edit the project name or description, or select **Delete** to
   delete the project.
   ![Projects overview](/img/docs/hcp-core/project-menu.png)

1. Select **View project** to go to the project settings page, where you
   can find the **project ID**.


## Manage resources

![HCP Organization Structure](/img/docs/hcp-core/diagram-hcp_organization_project-resources.png)

A resource is any item that the access management system controls access to. Examples of resources are an HCP Vault Dedicated cluster, an HCP Packer bucket, a HashiCorp Virtual Network (HVN), or an HCP Vault Secrets app. The **Active Resources** page lists all resources created in the project. To delete a project, all resources must be deleted. If a resource exists, HCP will block users from deleting the project. This page helps you identify what resources are still in the project.

![Active Resources](/img/docs/hcp-core/active-resources-page.png)
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/admin/projects/webhooks.mdx b/content/hcp-docs/content/docs/hcp/admin/projects/webhooks.mdx
new file mode 100644
index 0000000000..57c9022900
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/admin/projects/webhooks.mdx
---
page_title: Create and manage webhooks
description: |-
  Create and manage webhooks to notify external systems about a project resource's lifecycle events.
---

# Create and manage webhooks

This topic describes how to implement webhooks in HashiCorp Cloud Platform (HCP) that notify external systems about a project resource's lifecycle events.

## Viewing and managing webhooks

Click on a project in the HCP sidebar and choose **Project settings > Webhooks** to open the **Webhooks** page. The page shows any existing webhooks.

## Creating a webhook

Complete the following steps to create a webhook:

1. Click **Project settings > Webhooks**. The **Webhooks** page appears.
1. Click **Create webhook**. The **Create webhook** page appears.
1. Configure the following fields:
   - **Name:** Specify a name for the webhook. This field is required.
   - **Description:** Add a description for the webhook. Descriptions are useful for helping others understand the purpose of the webhook. This field is optional.
   - **Webhook URL:** Specify the destination for the webhook payload. The destination must accept HTTP or HTTPS `POST` requests and should be able to use the [payload](#webhook-payload). This field is required.
   - **Token:** Specify an arbitrary secret string that HCP uses to sign its webhook requests. Refer to [Webhook Authenticity][inpage-hmac] for details. You cannot view the token after you save the webhook configuration. This field is optional.
   - **Events:** Specify the events that you want to send to the destination specified in the Webhook URL field.
You can send payloads for all events or only for specific events. The events that are available to send are specific to the services that own the resources. Refer to the service's webhook documentation for event types and payloads.
1. Click **Create webhook**.

## Enabling and verifying a webhook

To enable or disable a webhook:

1. Click **Project settings > Webhooks**. The **Webhooks** page appears.
1. Click the ellipsis menu next to the webhook you want to manage. Depending on the current state of the webhook, one of the following options appears:
   - **Enable webhook:** Enables the webhook. HCP attempts to verify the webhook configuration by sending a [verification payload](#verification-payload), and only enables the webhook if the verification succeeds.
     For a verification to be successful, the destination must respond with an HTTP response code in the 200 - 299 range. If verification fails, HCP displays an error message and the configuration remains disabled.
   - **Disable webhook:** Disables the webhook. HCP stops delivering payloads to the destination URL.


## Webhook payload

Webhook payloads contain the following information:

- **Resource ID:** The ID of the resource the event is related to.
- **Resource name:** The resource name of the resource the event is related to.
- **Event ID:** The unique identifier for the event generated by the service, with the format `{service}.event:{unique-id}`. For example, `packer.event:t79BRg8WhTmDPBRM`.
- **Event action:** The type of action of this event. For example, `create`.
- **Event description:** The event description. For example, `Created version`.
- **Event source:** The source of the event. For example, `hashicorp.packer.version`. Event source might not be the same type as the resource that the webhook is subscribed to if the event is from a descendant resource.
  For example, webhooks are subscribed to a `hashicorp.packer.registry` and receive events for descendant resources such as a `hashicorp.packer.version`.
- **Event version:** The version of the event payload that is being sent.
- **Event payload:** The payload with the information about the resource's lifecycle event.

The service that owns the resource writes the payload. Refer to the service's webhook documentation linked in [HCP Webhook Events Documentation](#hcp-webhook-events-documentation) for details about specific payloads.

Third-party services may require additional fields that are outside of the HCP webhook payload. To integrate with these types of services, you must use a middleware service capable of translating payloads. The middleware must respond `200 OK` to HCP webhook requests and forward the translated payload to the third-party destination URL. Refer to the documentation for your third-party services for assistance.

The following example payload is from the `hashicorp.packer.version` resource:

```json
{
  "resource_id": "01HAVMCV8XWW945TNKT2KPYSN1",
  "resource_name": "packer/project/ff99bac7-eaec-40a1-8f55-5eb05e789401/registry/01HAVMCV8XWW945TNKT2KPYSN1",
  "event_id": "packer.event:MtCpPwmkdPpD8qqfMRhJ",
  "event_action": "create",
  "event_description": "Created version",
  "event_source": "hashicorp.packer.version",
  "event_version": "1",
  "event_payload": {
    "actor": {
      "principal_id": "ac7295a2-85ef-4594-b4c6-3a1f8b733f1a",
      "type": "TYPE_USER",
      "user": {
        "email": "user@email.com",
        "id": "d8f45791-460d-434e-8a40-f627e752276a",
        "name": "User Name"
      }
    },
    "bucket": {
      "id": "01HAVMDEAXNF5RYDDSK5R39HDP",
      "slug": "test"
    },
    "version": {
      "fingerprint": "01HAVMD1YBM4PA1KHNYFAYJREM",
      "id": "01HAVMD63G58XDA8JKS2B8J871",
      "revocation_author": "",
      "revocation_message": "",
      "revoke_at": "",
      "status": "RUNNING",
      "version": "v0"
    },
    "organization_id": "6a171c1d-c7cd-4047-ba1a-92d686dde2ed",
    "project_id": "ff99bac7-eaec-40a1-8f55-5eb05e789401",
    "registry": {
      "id": "01HAVMCV8XWW945TNKT2KPYSN1"
    }
  }
}
```

### Verification payload

HCP verifies that the webhook configuration is valid before creating, enabling, or updating a webhook URL and token. For the verification to be successful, the destination must respond to the verification payload with an HTTP response code in the 200 - 299 range.

```json
{
  "event_id": "webhook.event:mlizg1TCaSsrJ2hOXZMmS",
  "event_action": "test",
  "event_description": "Verification",
  "event_source": "hashicorp.webhook.verification",
  "resource_id": "",
  "resource_name": "",
  "event_version": "1",
  "event_payload": {}
}
```

The `event_id` is unique to each verification payload.

## Webhook authenticity

[inpage-hmac]: #webhook-authenticity

For webhook configurations that include a secret token, HCP webhook requests include an `X-HCP-Webhook-Signature` header, which contains an HMAC signature computed from the token using the SHA-512 digest algorithm.
The receiving service is responsible for validating the signature.

The following example verifies the HMAC using Ruby:

```ruby
# `token` must be the same secret token configured for the webhook in HCP;
# here it is read from an environment variable (name is illustrative).
token = ENV["HCP_WEBHOOK_TOKEN"]
hmac = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("sha512"), token, @request.body)
fail "Invalid HMAC" if hmac != @request.headers["X-HCP-Webhook-Signature"]
```

## Managing webhooks with Terraform

You can create and manage webhooks using the HCP Terraform provider. For more information, see the HCP Terraform provider [hcp_notifications_webhook resource documentation](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/notifications_webhook).

## HCP Webhook Events Documentation

For more information about the events that are available to send, refer to the service's webhook documentation.
+ +- [HCP Packer Webhook Events](/hcp/docs/packer/reference/webhook) diff --git a/content/hcp-docs/content/docs/hcp/admin/support.mdx b/content/hcp-docs/content/docs/hcp/admin/support.mdx new file mode 100644 index 0000000000..1152a90e2f --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/admin/support.mdx @@ -0,0 +1,70 @@ +--- +page_title: Support +description: |- + This topic introduces support-related information about using HashiCorp Cloud Platform (HCP), including service level agreements, available support plans, limitations, and service quotas. +--- + +# HCP support + +This topic introduces support-related information about using HashiCorp Cloud Platform (HCP), including service level agreements, available support plans, limitations, and service quotas. + +## Support plans + +The Enterprise Support Plan for HCP offers a number of support levels and response times based on your [cluster tier](https://cloud.hashicorp.com/pricing) or contract terms. + +Refer to the [Enterprise Support Plan](https://www.hashicorp.com/customer-success/enterprise-support) or [HCP Pricing](https://cloud.hashicorp.com/pricing) page for details. + +## Service Level Agreement + +HashiCorp's Cloud Service Level Agreement (SLA) applies to our cloud services, exclusively. HashiCorp's Cloud services SLA does not apply to any other product or products offered by HashiCorp. +Cloud SLAs apply to all production-grade (non-development tier) clusters. + +Refer to the [HashiCorp Cloud SLA](https://cloud.hashicorp.com/sla) for details. + +## Service quotas + +HCP enables customers to provision a finite number of resources, such as HVNs and clusters, immediately after sign up. + +HCP service quotas are the maximum number of resources that can be provisioned in your HCP account. +HCP, as well as individual services that run on the platform (e.g., HVNs, Consul clusters, etc.), enforce various quotas. 

Quotas apply globally, but individual service quotas are applied at the **organization** level with the exception of the HCP Boundary cluster quota. (See the table below.)

The following table describes the quotas and default values.

| Service | Service Quota Type | Default Value | Regional or Global | Adjustable Value |
| ------------------------ | -------------------------------------------------------------------------------------------------- | ----------------- | ------------------ | ---------------- |
| HashiCorp Cloud Platform | [Projects](/hcp/docs/hcp/admin/projects) | 100 | Global | No |
| HashiCorp Cloud Platform | [HashiCorp Virtual Networks](/hcp/docs/hcp/network) | 5 | Global | Yes |
| HashiCorp Cloud Platform | [Service Principals](/hcp/docs/hcp/iam/service-principal) | 5 | Global | Yes |
| HashiCorp Cloud Platform | [Keys per Service Principal](/hcp/docs/hcp/iam/service-principal#generate-a-service-principal-key) | 2 | Global | Yes |
| HashiCorp Cloud Platform | [HVN Routes](/hcp/docs/hcp/network/hvn-aws/routes) | 15 | Global | Yes |
| HashiCorp Cloud Platform | [Transit Gateway Attachments](/hcp/docs/hcp/network/hvn-aws/routes) | 10 | Global | Yes |
| HashiCorp Cloud Platform | [HVN Peering Connections](/hcp/docs/hcp/network/hvn-aws/hvn-peering) | 10 | Global | Yes |
| HCP Consul Dedicated | [Consul clusters](/hcp/docs/consul) | 6 | Global | Yes |
| HCP Vault Dedicated | [Vault clusters](/hcp/docs/vault) | 6 | Global | Yes |
| HCP Vault Dedicated | [Vault performance secondaries](/hcp/docs/vault/perf-replication) | 5 | Global | Yes |
| HCP Boundary | [Boundary clusters](/hcp/docs/boundary) | 1 per **project** | Global | No |

Last Update: March 4, 2024



Global service quotas are enforced at the organization level. Therefore, you may not be able to create a new resource within a project if the quota has
already been met by other resources in other projects in the organization.




For HCP Vault Dedicated, the performance secondaries quota is dependent on the overall
Vault cluster quota. If requesting a modification for the secondary quota,
you may also need to request a modification for the overall quota.



### Request additional resources

Depending on your business needs, you may want to increase your service quota values. Quota requests are handled by HashiCorp Support.

Submit a request form with HashiCorp Support to request additional resources: [HCP Quota Request](https://support.hashicorp.com/hc/en-us/requests/new).
Include your organization ID in the request.
diff --git a/content/hcp-docs/content/docs/hcp/api/index.mdx b/content/hcp-docs/content/docs/hcp/api/index.mdx
new file mode 100644
index 0000000000..a8b3138ed9
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/api/index.mdx
---
page_title: HCP API Overview
description: |-
  Learn how to authenticate and interact with the HCP API.
---

# HCP API Overview

This topic provides an overview of the HCP API, a RESTful API for managing platform resources.

## Overview

The following steps describe the procedure for interacting with the HCP API:

1. Authenticate to HCP to generate an access token.
1. Use the access token to interact with your desired endpoints.

The HCP API uses the following URLs:

| Attribute | Description | Value |
| ---------------------- | ----------------------------------------------------------------------------- | ------------------------------- |
| API base URL | The base endpoint URL for the HCP API service | https://api.cloud.hashicorp.com |
| HCP Authentication URL | The URL to authenticate to HCP using a service principal client ID and secret | https://auth.idp.hashicorp.com |


## Prerequisites

You must have the following information to authenticate and interact with the API:

- **Client ID**: Service principal client ID used to authenticate to HCP.
Refer to [Create a service principal](/hcp/docs/hcp/iam/service-principal#create-a-service-principal) for instructions on getting the client ID.
- **Client secret**: Service principal client secret used to authenticate to HCP. Refer to [Create a service principal](/hcp/docs/hcp/iam/service-principal#create-a-service-principal) for instructions on getting the client secret.
- **Organization ID**: HCP organization that contains the project of the resources you want to query. Refer to [HCP Organizations](/hcp/docs/hcp/admin/orgs) for information about creating and managing organizations. You can retrieve your organization ID from the **Organization settings** page.
- **Project ID**: HCP project that contains the resources you want to query. Refer to [HCP Projects](/hcp/docs/hcp/admin/projects) for information about creating and managing projects. You can retrieve your project ID from the **Project settings** page.

Assign the IDs to the following variables:

- Set the `HCP_CLIENT_ID` and `HCP_CLIENT_SECRET` environment variables to a [service principal key](/hcp/docs/hcp/iam/service-principal#create-a-service-principal). HCP uses these values to authenticate you and generate an access token.
- Set the `ORGANIZATION_ID` environment variable to your HCP organization ID.
- Set the `PROJECT_ID` environment variable to your HCP project ID.

## Authenticate to HCP

The HCP API requires an access token to authorize the request.

Generate the access token with your HCP client ID and secret.

```shell-session
$ curl --location "https://auth.idp.hashicorp.com/oauth2/token" \
--header "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "client_id=$HCP_CLIENT_ID" \
--data-urlencode "client_secret=$HCP_CLIENT_SECRET" \
--data-urlencode "grant_type=client_credentials" \
--data-urlencode "audience=https://api.hashicorp.cloud"
```

In the following example response, the bearer access token is valid for one hour:



```json
{
  "access_token": "eyJhbGciOiJSUzI1NiIsInR...",
  "expires_in": 3600,
  "token_type": "Bearer"
}
```



Set the `HCP_ACCESS_TOKEN` environment variable to the access token. This action makes it easier to reference the token when you are interacting with the API.

```shell-session
$ export HCP_ACCESS_TOKEN=
```

## Interact with API

After you retrieve the access token, you are ready to use the HCP API. Provide the access token in the `Authorization` header as a bearer token in each request.

The following example shows you how to retrieve HCP Packer buckets in your HCP project. The workflow and authorization process is similar for other HCP operations.

```shell-session
$ curl --location "https://api.cloud.hashicorp.com/packer/2021-04-30/organizations/$ORGANIZATION_ID/projects/$PROJECT_ID/images?pagination.page_size=10" \
--header "authorization: Bearer $HCP_ACCESS_TOKEN"
```

Refer to the API documentation of the service that owns the resources for more information.

## Versioning

Versions are specific to each service that owns resources. Version names use the format `YYYY-MM-DD`, for example, `2021-04-30`.
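For reference, the token exchange and authenticated request shown in the curl examples above can be sketched in Python. This is a minimal sketch, not an official client: the helper names are our own, and the endpoints and field names are the ones documented on this page. Send the built requests with any HTTP client.

```python
# Sketch of the HCP auth and request flow: exchange a service principal
# key for a bearer token, then attach it to API calls.

AUTH_URL = "https://auth.idp.hashicorp.com/oauth2/token"
API_BASE = "https://api.cloud.hashicorp.com"

def token_request_body(client_id: str, client_secret: str) -> dict:
    """Form fields for the OAuth2 client-credentials exchange (POST to AUTH_URL)."""
    return {
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "audience": "https://api.hashicorp.cloud",
    }

def api_request(access_token: str, path: str) -> tuple[str, dict]:
    """URL and headers for an authenticated HCP API call."""
    return API_BASE + path, {"Authorization": f"Bearer {access_token}"}

# Example: the HCP Packer bucket listing from the curl example above
# (ORG_ID and PROJ_ID are placeholders for your own IDs).
url, headers = api_request(
    "eyJhbGciOi...",  # access_token value from the auth response
    "/packer/2021-04-30/organizations/ORG_ID/projects/PROJ_ID/images",
)
```

The access token expires after an hour, so long-running tools should re-run the exchange when the `expires_in` window lapses.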
diff --git a/content/hcp-docs/content/docs/hcp/audit-log/enable/cloudwatch.mdx b/content/hcp-docs/content/docs/hcp/audit-log/enable/cloudwatch.mdx new file mode 100644 index 0000000000..9e69a5ba27 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/audit-log/enable/cloudwatch.mdx @@ -0,0 +1,180 @@ +--- +page_title: Enable audit log streaming to AWS CloudWatch +description: |- + Learn how to set up platform and product audit log streaming to AWS CloudWatch. +--- + +# Enable audit log streaming to AWS CloudWatch + +This page describes how to stream an organization’s HCP audit logs to AWS CloudWatch, where you can review them. Enable audit log streaming from the HCP portal or use the [HCP Terraform provider's `hcp_log_streaming_destination` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/log_streaming_destination). + +## Requirements + +@include '/audit-logs/requirements.mdx' + +### Terraform provider method + +To configure and enable audit log streaming with Terraform instead of the HCP UI, the following software and provider versions are required. + +- Terraform v1.1.5 or later. For the best experience, we recommend using the latest release. +- [HashiCorp Cloud Platform (HCP) Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) version 0.83.0 or higher. + +You must also configure the HCP provider to authenticate using an [organization-level service principal](/hcp/docs/hcp/iam/service-principal#organization-level-service-principals) and service principal key. Refer to the [Authenticate with HCP guide in the Terraform registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/guides/auth) for more information. + +## Workflow + +You can enable audit log streaming from HCP to AWS CloudWatch using the dedicated HCP workflow.
You also have the option to create and manage your organization's infrastructure using the [HCP provider in the Terraform Registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs). + +Complete the following steps to enable audit log streaming. + + + + + +1. Outside of HCP, create an AWS IAM role with an attached policy for the audit logs. +1. Enable audit log streaming for the organization in HCP. +1. View the audit logs. + + + + + +1. Outside of HCP, create an AWS IAM role with an attached policy for the audit logs. +1. Enable audit log streaming with the following actions: + 1. Configure the [`hcp_log_streaming_destination`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/log_streaming_destination) resource. + 1. Update your infrastructure with Terraform. +1. View the audit logs. + + + + +## Create an AWS IAM role with an attached policy for the audit logs + +To enable audit log streaming to AWS CloudWatch, you must create an AWS IAM role in your AWS account that allows HashiCorp to stream audit logs to your account’s AWS CloudWatch service. You can create a role and attach a policy manually in the AWS Console, or you can create the resources with Terraform. + +When you create an audit log streaming destination from [the HCP Portal](https://portal.cloud.hashicorp.com/), you are provided values for the `AWS ID` and `External ID` that you need for your AWS environment. To use these values to create the role from the AWS console, refer to the [instructions in the AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html). + +To create the role and policy using Terraform, add the following [AWS Terraform provider resources](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) to your configuration. The following example is a sketch; replace the placeholder values in the trust policy with the `AWS ID` and `External ID` values from the HCP Portal. + +```hcl +resource "aws_iam_role" "cloudwatch_hcp_audit_logs" { + name = "cloudwatch-hcp-audit-logs" + assume_role_policy = <<EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "AWS_ID_FROM_HCP_PORTAL" + }, + "Action": "sts:AssumeRole", + "Condition": { + "StringEquals": { + "sts:ExternalId": "EXTERNAL_ID_FROM_HCP_PORTAL" + } + } + } + ] +} +EOF +} + +resource "aws_iam_role_policy" "hcp_log_streaming_policy" { + name = "hcp-log-streaming-policy" + role = aws_iam_role.cloudwatch_hcp_audit_logs.id + policy = <<EOF +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "HCPLogStreaming", + "Effect": "Allow", + "Action": [ + "logs:PutLogEvents", + "logs:DescribeLogStreams", + "logs:DescribeLogGroups", + "logs:CreateLogStream", + "logs:CreateLogGroup", + "logs:TagLogGroup" + ], + "Resource": "*" + } + ] +} +EOF +} +``` + +## Enable audit log streaming + +To enable audit log streaming to AWS CloudWatch, complete the following steps. + + + + + +1.
Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization you want to stream audit logs from. +1. Click **Audit log streaming**. +1. Click **Create streaming destination**. +1. Select **AWS CloudWatch**. +1. Create the AWS role and policy if you did not already do so. +1. Complete the required configuration fields: + - **Destination name**. This label appears in the list of audit log streams for the HCP organization. + - **Role ARN**. The AWS resource identifier for the IAM role that authorizes HCP to stream audit logs to your CloudWatch environment. + - **Region**. The AWS region where you store your CloudWatch data. +1. Note the **Log group name**. Logs appear in AWS CloudWatch under this name. You cannot edit this value. +1. Click **Test connection** to generate a test log that HCP sends to AWS CloudWatch. +1. Click **Save**. + + + + + +You can also enable audit log streaming using Terraform and the [HCP provider in the Terraform Registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs). + +### Configure the `hcp_log_streaming_destination` resource + +For your HCP organization, configure the `hcp_log_streaming_destination` resource with the ARN of the role you created and your external ID. If the log group you specify does not exist, Terraform creates it when the first log streams to CloudWatch.
The following example provides a typical configuration for this resource: + +```hcl +resource "hcp_log_streaming_destination" "aws_cloudwatch" { + name = "" + cloudwatch = { + external_id = "hcp-log-stream" + region = "us-east-1" + role_arn = "arn:aws:iam::111111111:role/cloudwatch-hcp-audit-logs" + log_group_name = "/hashicorp/hcp/audit-logs/" + } +} +``` + +For more information about how to use role ARNs and log groups, refer to [IAM identifiers](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html) and [Working with log groups and streams](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html) in the AWS documentation. + +### Update your infrastructure with Terraform + +After you configure the resource, run `terraform apply` to update your deployment. To target the resources you just created, run the following command: + +```shell-session +$ terraform apply -target "aws_iam_role.cloudwatch_hcp_audit_logs" -target "aws_iam_role_policy.hcp_log_streaming_policy" -target "hcp_log_streaming_destination.aws_cloudwatch" +``` + +To view the audit logs, go to the AWS CloudWatch service page. Click **Log groups**. Click **/hashicorp/hcp/audit-logs/**, and then click the log stream. Logs appear after you generate them. + + + + +## View the audit logs + +To view the audit logs in AWS, go to the CloudWatch service page. Click **Log groups** and then select **/hashicorp/hcp/audit-logs/**. Audit log entries appear in CloudWatch after they occur on the HCP platform. diff --git a/content/hcp-docs/content/docs/hcp/audit-log/enable/datadog.mdx b/content/hcp-docs/content/docs/hcp/audit-log/enable/datadog.mdx new file mode 100644 index 0000000000..5254382499 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/audit-log/enable/datadog.mdx @@ -0,0 +1,112 @@ +--- +page_title: Enable audit log streaming to Datadog +description: |- + Learn how to set up platform and product audit log streaming to Datadog.
+--- + +# Enable audit log streaming to Datadog + +This page describes how to stream an organization’s HCP audit logs to Datadog, where you can review them. Enable audit log streaming from the HCP portal or use the [HCP Terraform provider's `hcp_log_streaming_destination` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/log_streaming_destination). + +## Requirements + +@include '/audit-logs/requirements.mdx' + +### Terraform provider method + +To configure and enable audit log streaming with Terraform instead of the HCP UI, the following software and provider versions are required. + +- Terraform v1.1.5 or later. For the best experience, we recommend using the latest release. +- [HashiCorp Cloud Platform (HCP) Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) version 0.86.0 or higher. + +You must also configure the HCP provider to authenticate using an [organization-level service principal](/hcp/docs/hcp/iam/service-principal#organization-level-service-principals) and service principal key. Refer to the [Authenticate with HCP guide in the Terraform registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/guides/auth) for more information. + +## Workflow + +You can enable audit log streaming from HCP to Datadog using the dedicated HCP workflow. You also have the option to create and manage your organization's infrastructure using the [HCP provider in the Terraform Registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs). + +Complete the following steps to enable audit log streaming: + + + + + +1. Outside of HCP, create a Datadog API key and an optional application key. +1. Enable audit log streaming. +1. View the audit logs. + + + + + +1. Create a Datadog API key and an optional application key. +1. Enable audit log streaming with the following actions: + 1.
Configure the [`hcp_log_streaming_destination`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/log_streaming_destination) resource. + 1. Update your infrastructure with Terraform. +1. View the audit logs. + + + + +## Create a Datadog API key + +To enable audit log streaming to Datadog, you must create an API key and an optional application key that identify your unique Datadog organization. You can create these keys in the **Organization Settings** in Datadog. + +Refer to [API and Application Keys in the Datadog documentation](https://docs.datadoghq.com/account_management/api-app-keys/) for more information. + +## Enable audit log streaming + +To enable audit log streaming to Datadog, complete the following steps. + + + + + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization you want to stream audit logs from. +1. Click **Audit log streaming**. +1. Click **Create streaming destination**. +1. Select **Datadog**. +1. Complete the required configuration fields: + - **Destination name**. This label appears in the list of audit log streams for the HCP organization. + - **API key**. The value of your Datadog API key. + - **Datadog site region**. This value must match your Datadog dashboard's region. +1. Click **Test connection** to generate a test log that HCP sends to Datadog. +1. Click **Save**. + + + + + +You can enable audit log streaming using Terraform and the [HCP provider in the Terraform Registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs). + +### Configure the `hcp_log_streaming_destination` resource + +For your HCP organization, configure the `hcp_log_streaming_destination` resource with the Datadog endpoint and your Datadog API key.
The following example demonstrates this configuration: + +```hcl +resource "hcp_log_streaming_destination" "datadog" { + name = "" + datadog = { + endpoint = "https://http-intake.logs.datadoghq.com/api/v2/logs" + api_key = "" + } +} +``` + +This example uses the Datadog logs endpoint for the US region. For more information about formatting the Datadog endpoint for your specific region, refer to [Access the Datadog site in the Datadog documentation](https://docs.datadoghq.com/getting_started/site/#access-the-datadog-site). + +### Update your infrastructure with Terraform + +After you configure the resource, run `terraform apply` to update your deployment. To target the resource you just created, run the following command: + +```shell-session +$ terraform apply -target "hcp_log_streaming_destination.datadog" +``` + + + + +## View the audit logs + +To view audit logs, visit the Log Explorer on Datadog. You can search and filter logs from the `HCP` service. You can also apply filters by HCP organization or product. Logs appear after you generate them. \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/audit-log/enable/splunk.mdx b/content/hcp-docs/content/docs/hcp/audit-log/enable/splunk.mdx new file mode 100644 index 0000000000..919e1d32fc --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/audit-log/enable/splunk.mdx @@ -0,0 +1,128 @@ +--- +page_title: Enable audit log streaming to Splunk Cloud +description: |- + Learn how to set up platform and product audit log streaming to Splunk Cloud. +--- + +# Enable audit log streaming to Splunk Cloud + +This page describes how to stream an organization’s HCP audit logs to Splunk Cloud, where you can review them. Enable audit log streaming from the HCP portal or use the [HCP Terraform provider's `hcp_log_streaming_destination` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/log_streaming_destination).
+ +## Requirements + +@include '/audit-logs/requirements.mdx' + +@include '/audit-logs/limitations/splunk.mdx' + +### Terraform provider method + +To configure and enable audit log streaming with Terraform instead of the HCP UI, the following software and provider versions are also required. + +- Terraform v1.1.5 or later. For the best experience, we recommend using the latest release. +- [HashiCorp Cloud Platform (HCP) Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) version 0.80.0 or higher. + +You must also configure the HCP provider to authenticate using an [organization-level service principal](/hcp/docs/hcp/iam/service-principal#organization-level-service-principals) and service principal key. Refer to the [Authenticate with HCP guide in the Terraform registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/guides/auth) for more information. + +## Workflow + +You can enable audit log streaming from HCP to Splunk Cloud using the dedicated HCP workflow. You also have the option to create and manage your organization's infrastructure using the [HCP provider in the Terraform Registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs). + +Complete the following steps to enable audit log streaming: + + + + + +1. Outside of HCP, create an audit log index on Splunk Cloud. +1. Outside of HCP, create and retrieve an HTTP Event Collector (HEC) token. +1. Enable audit log streaming. +1. View the audit logs. + + + + + +1. Outside of HCP, create an audit log index on Splunk Cloud. +1. Outside of HCP, create and retrieve an HTTP Event Collector (HEC) token. +1. Enable audit log streaming with the following actions: + 1. Configure the [`hcp_log_streaming_destination`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/log_streaming_destination) resource. + 1. Update your infrastructure with Terraform. +1. View the audit logs.
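For the Terraform path, the workflow above assumes the HCP provider is installed and can authenticate. A minimal provider configuration might look like the following sketch; the version constraint mirrors the requirement above, and authentication through the `HCP_CLIENT_ID` and `HCP_CLIENT_SECRET` environment variables follows the linked authentication guide:

```hcl
terraform {
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.80.0"
    }
  }
}

# The provider reads HCP_CLIENT_ID and HCP_CLIENT_SECRET from the
# environment. Use a key for an organization-level service principal.
provider "hcp" {}
```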
+ + + + +## Create an audit log index + +To enable audit log streaming to Splunk Cloud, you must assign an index to stream the logs to. If you do not have an index already, create an event index named `hcp-audit-logs`. For more information, refer to [create events indexes in the Splunk documentation](https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Setupmultipleindexes#Create_events_indexes_2). + +## Create and retrieve an HTTP Event Collector (HEC) token + +Create an HEC token on Splunk Cloud. Apply the following settings when you create the token: + +- Give your HEC token a name, such as `hcp-log-stream`. +- Assign the `hcp-audit-logs` index you created to your token. + +For guidance on creating the token, refer to [Set up and use HTTP Event Collector in Splunk Web in the Splunk documentation](https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector). + + + +@include '/audit-logs/limitations/splunk.mdx' + + + +For guidance on retrieving the token value, refer to [Manage HTTP Event Collector tokens in the Splunk documentation](https://docs.splunk.com/Documentation/Splunk/latest/Data/HTTPEventCollectortokenmanagement#Manage_HTTP_Event_Collector_tokens_with_cURL). + +## Enable audit log streaming + +To enable audit log streaming to Splunk Cloud, complete the following steps. + + + + + +1. Sign in to the [HCP Portal](https://portal.cloud.hashicorp.com). +1. Select the organization you want to stream audit logs from. +1. Click **Audit log streaming**. +1. Click **Create streaming destination**. +1. Select **Splunk Cloud**. +1. Complete the required configuration fields: + - **Destination name**. This label appears in the list of audit log streams for the HCP organization. + - **HTTP event collector (HEC) endpoint**. This endpoint has the following format: `https://http-inputs-<deployment-name>.splunkcloud.com/services/collector/event`.
Refer to [Send data to HTTP Event Collector in the Splunk documentation](https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform) for more information. + - **Token**. The value of the HEC token. +1. Click **Test connection** to generate a test log that HCP sends to Splunk Cloud. +1. Click **Save**. + + + + + +### Configure the `hcp_log_streaming_destination` resource + +For your HCP organization, configure the `hcp_log_streaming_destination` resource with the HEC token and your Splunk endpoint. The following example demonstrates this configuration: + +```hcl +resource "hcp_log_streaming_destination" "my_splunk_cloud" { + name = "" + splunk_cloud = { + endpoint = "https://http-inputs-<deployment-name>.splunkcloud.com/services/collector/event" + token = "" + } +} +``` + +For more information about how to format the URI for the HEC’s endpoint, refer to [Send data to HTTP Event Collector in the Splunk documentation](https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform). + +### Update your infrastructure with Terraform + +After you configure the resource, run `terraform apply` to update your deployment. To target the resource you just created, run the following command: + +```shell-session +$ terraform apply -target "hcp_log_streaming_destination.my_splunk_cloud" +``` + + + + +## View audit logs + +To view audit logs, search for the name of your audit log index on Splunk Cloud: `hcp-audit-logs`. Logs appear after you generate them.
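For example, in Splunk's Search & Reporting app, a search scoped to the index surfaces the streamed events. The following query is a sketch; substitute your index name if you chose a different one:

```
index="hcp-audit-logs"
```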
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/audit-log/index.mdx b/content/hcp-docs/content/docs/hcp/audit-log/index.mdx new file mode 100644 index 0000000000..3ef1a83f52 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/audit-log/index.mdx @@ -0,0 +1,59 @@ +--- +page_title: HCP audit log streaming +description: |- + This topic provides an overview of HashiCorp Cloud Platform (HCP) audit log streaming, as well as usage instructions to set up platform and product audit log streaming to AWS CloudWatch, Datadog, and Splunk Cloud. +--- + +# HCP audit log streaming + +This topic details HashiCorp Cloud Platform’s (HCP) unified audit log streaming capabilities and the process to enable audit log streaming for HCP platform and product events. + +## Introduction + +Audit logs are a record of system events and corresponding identification data that are typically collected for security compliance measures or to aid in an incident response. In HCP, audit logs capture information about events for the entire [HCP organization](/hcp/docs/hcp/admin/orgs). + +HashiCorp Cloud Platform produces two types of audit logs that you can access: + +- _Platform audit logs_ track an organization’s interactions with the overall HCP platform, including when users sign in and create projects. +- _Product audit logs_ track an organization’s interactions with the individual HCP products, such as HCP Vault Secrets. + +You can stream an organization's audit logs to an external security information and event management (SIEM) provider, such as Splunk or AWS CloudWatch, where you can review them. + +You can enable audit log streaming through: + +1. The HCP Portal, on an organization's **Audit log streaming** page +1.
HashiCorp’s official HCP Terraform provider resource, [`hcp_log_streaming_destination`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/log_streaming_destination) + +Previously, platform audit logs were not directly accessible by HCP users. Product audit logs are available for each HCP product separately. + +## Workflow + +The overall workflow to enable audit log streaming from HCP to an external security information and event management (SIEM) system consists of the following steps: + +1. Prepare the destination and retrieve the required credentials. This step varies slightly depending on your SIEM system. +1. Configure the audit log streaming destination in [the HCP Portal](https://portal.cloud.hashicorp.com/). You can also use Terraform and the [HashiCorp Cloud Platform (HCP) Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) to complete this step. +1. Verify the connection. Use the connection test in the HCP UI to generate a log and send it to your SIEM system. Alternatively, take any action in HCP that generates an audit log, such as attempting to sign in to an HCP Boundary cluster. +1. View the audit log on your external SIEM system to confirm that streaming is properly configured. + +## Guidance + +The HCP documentation has several resources to help you stream audit logs from HCP. + +### Usage documentation + +- [Enable audit log streaming to AWS CloudWatch](/hcp/docs/hcp/audit-log/enable/cloudwatch) +- [Enable audit log streaming to Datadog](/hcp/docs/hcp/audit-log/enable/datadog) +- [Enable audit log streaming to Splunk Cloud](/hcp/docs/hcp/audit-log/enable/splunk) + +### Reference documentation + +- [HCP audit log event and payload reference](/hcp/docs/hcp/audit-log/reference) + +## Constraints and limitations + +Be aware of the following technical constraints and limitations for HCP audit log streaming: + +- You must authenticate to HCP with an organization-level service principal.
Authentication with a project-level service principal results in an error. +- If you provide a credential, such as a token or API key, that is not valid or does not have the correct permissions, HCP does not store the logs that it cannot stream. Logs begin to stream after you apply valid authentication credentials using the HCP UI or the Terraform provider. +- HCP does not process the audit log queue synchronously. It attempts to send logs for 7 days and performs an exponential backoff over that period by increasing the amount of time between attempts. +- Audit log redelivery is also subject to individual SIEM provider constraints. For example, Datadog accepts redelivery requests for logs within 18 hours of their timestamp. For more information about log event requirements, refer to [Custom log forwarding in the Datadog documentation](https://docs.datadoghq.com/logs/log_collection/?tab=host#custom-log-forwarding). \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/audit-log/reference.mdx b/content/hcp-docs/content/docs/hcp/audit-log/reference.mdx new file mode 100644 index 0000000000..8349d414c5 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/audit-log/reference.mdx @@ -0,0 +1,378 @@ +--- +page_title: HCP audit log streaming reference +description: |- + Learn about platform and product audit log streaming from HCP. +--- + +# HCP audit log streaming reference + +This page provides reference information for HashiCorp Cloud Platform (HCP) audit log streaming. HCP produces audit logs when an organization’s users interact with HCP platform services and individual products. + +## HCP platform events + +The HCP platform produces audit logs when the following events occur.
+ +- Create, read, update, and delete (CRUD) operations +- Identity events + - Sign up for new users + - Sign in for existing users + - MFA authentication success + - Successful login + - Password reset for accounts + - Revoke HCP Portal active sessions by user principal + - Remove user from HCP Organization + - User joins HCP Organization + - Sign in failure to HCP + - Bad password + - Bad MFA + - MFA enabled + - MFA disabled + - HCP Project deletion +- Role-based access control (RBAC) + - Add and delete users + - Manage user permissions + - View users and groups + - Manage service principals + - Manage groups + - View current billing status + - Create projects + - View projects + - View project resources + - Request Organization deletion + +## HCP Boundary events + +HCP Boundary generates audit logs when events related to the following Boundary resources occur. + +- Clusters +- Sessions +- Scopes +- Workers +- Credential Stores, Credential Libraries, Credentials +- Auth Methods, Roles, Managed Groups, Groups, Users, Accounts, Grants +- Host Catalogs, Host Sets, Hosts, Targets + +For more information, refer to [Auditing](/boundary/docs/concepts/auditing) in the Boundary documentation. + +## HCP Packer events + +HCP Packer generates audit logs when the following events occur. + +- Bucket events + - Create bucket + - Delete bucket + - Update bucket + - Create bucket labels + - Update bucket labels +- Build events + - Created build + - Updated build +- Channel events + - Created channel + - Deleted channel + - Updated channel + - Assigned version to channel +- Version events + - Created version + - Completed version + - Revoked version + - Restored version + - Deleted version + - Scheduled version revocation + - Cancelled version revocation + +For a complete list of HCP Packer audit log events and metadata fields, refer to [HCP Packer audit log descriptions and metadata](/hcp/docs/packer/reference/audit-log).
+ +## HCP Vault Radar events + +HCP Vault Radar produces audit logs for the following user actions: +
+| Entity | Actions | Action type |
+| :--- | :--- | :---: |
+| Agent | Create agent<br/>Delete agent | `CREATE`<br/>`DELETE` |
+| Agent (old station API) | Create agent<br/>Delete agent | `CREATE`<br/>`DELETE` |
+| Data Source | Create Data Source<br/>Update Data Source<br/>Update Data Sources<br/>Update Data Source feature | `CREATE`<br/>`UPDATE`<br/>`UPDATE`<br/>`UPDATE` |
+| Data Source | Create Data Source with Public API<br/>Update Data Source with Public API | `CREATE`<br/>`UPDATE` |
+| Data Source Group | Create Data Source Group<br/>Update Data Source Group | `CREATE`<br/>`UPDATE` |
+| Secret Manager Location | Update Secret Manager Locations<br/>Delete Secret Manager Location | `UPDATE`<br/>`DELETE` |
+| Secret | Secrets Copy Job | `CREATE` |
+| Event | Update Event | `CREATE` |
+| Event | Update Event with Public API | `CREATE` |
+| Global Ignore Rules | Update Global Ignore Rules<br/>Delete Global Ignore Rules | `UPDATE`<br/>`DELETE` |
+| Global Ignore Rules | Update Global Ignore Rules with Public API<br/>Delete Global Ignore Rules with Public API | `UPDATE`<br/>`DELETE` |
+| Event Rule | Update Rules<br/>Delete Rule | `UPDATE`<br/>`DELETE` |
+| Scan / Secret re-index | Schedule Scans | `CREATE` |
+| Integration | Create Connection<br/>Update Connection<br/>Delete Connection | `CREATE`<br/>`UPDATE`<br/>`DELETE` |
+| Integration | Create Subscription<br/>Update Subscription<br/>Delete Subscription | `CREATE`<br/>`UPDATE`<br/>`DELETE` |
+| Subscription Filter | Create Subscription Filter<br/>Delete Subscription Filter | `CREATE`<br/>`DELETE` |
+| Custom Expressions | Create Custom Expression<br/>Update Custom Expression<br/>Delete Custom Expression | `CREATE`<br/>`UPDATE`<br/>`DELETE` |
+| Custom Expressions | Create Custom Expression with Public API<br/>Update Custom Expression with Public API<br/>Delete Custom Expression with Public API | `CREATE`<br/>`UPDATE`<br/>`DELETE` |
+| Filter | Create Filter<br/>Update Filter<br/>Delete Filter | `CREATE`<br/>`UPDATE`<br/>`DELETE` |
+| Remediation | Create Remediations<br/>Update Remediation | `CREATE`<br/>`UPDATE` |
+ +## HCP Vault Secrets events + +HCP Vault Secrets produces audit logs when the following events occur. + +- Create, update, delete applications +- Create, update, read, delete secrets +- Create, update, read, delete integrations + +## HCP Vault Dedicated events + +HCP Vault Dedicated produces audit logs when the following events occur. + +- Create, update, delete clusters +- Create, restore snapshots +- Add, delete plugins +- Lock, unlock cluster +- Fetch audit log +- Update version +- Host manager alive check +- Plugin registered check + +## HCP Waypoint events + +HCP Waypoint produces audit logs when the following events occur: + +- Action run started +- Create, read, update, delete action configs +- Create, read, update, delete, list add-ons +- Create, read, update, delete, list add-on definitions +- Create, read, update, delete, list application templates +- Create, read, update, delete, list applications +- Create, update, delete TFC configs + +## Payload examples + +Refer to the following sections for examples of the audit logs generated by product and platform events. + +- [Platform user authentication payload example](#platform-user-authentication-payload-example) +- [Platform project deletion payload example](#platform-project-deletion-payload-example) +- [HCP Boundary event payload example](#hcp-boundary-event-payload-example) +- [HCP Vault Radar event payload example](#hcp-vault-radar-event-payload-example) + +### Platform user authentication payload example + +A user signing in to HCP generates an audit log that contains the following information.
+ + + +```json +{ + "request_info": { + "http_verb": "GET", + "http_path": "/consent/complete" + }, + "principal": { + "user": { + "email": "jane.doe@company.com", + "full_name": "jane.doe@company.com" + } + }, + "authentication_info": { + "principal": { + "id": "e6132914-c9bf-4bea-854a-7520bb57bf7b", + "type": "PRINCIPAL_TYPE_USER", + "user": { + "id": "e6132914-c9bf-4bea-854a-7520bb57bf7b", + "email": "jane.doe@company.com", + "full_name": "jane.doe@company.com", + "subject": "e6132914-c9bf-4bea-854a-7520bb57bf7b" + } + } + }, + "metadata": { + "email": "jane.doe@company.com", + "event_type": "hcp_id_auth_success", + "ip": "69.323.323.201", + "message": "Authenticated successfully", + "timestamp": "2024-01-18 19:50:55 +0000 UTC", + "user_id": "e6132914-c9bf-4bea-854a-7520bb57bf7b" + }, + "operation_info": {}, + "description": "Authenticated successfully", + "action": "CREATE", + "status_code": "OK" +} +``` + + + +### Platform project deletion payload example + +Deleting a project from HCP generates an audit log that contains the following information. 
+ + + +```json +{ + "request_info": { + "http_verb": "DELETE", + "http_path": "/resource-manager/2019-12-10/projects/c666065a-b21e-489c-8045-a79d3802fb64", + "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36", + "http_client_ip": "69.323.323.201" + }, + "principal": { + "user": { + "email": "jane.doe@company.com", + "full_name": "jane.doe@company.com" + } + }, + "authentication_info": { + "principal": { + "id": "e6132914-c9bf-4bea-854a-7520bb57bf7b", + "type": "PRINCIPAL_TYPE_USER", + "user": { + "id": "e6132914-c9bf-4bea-854a-7520bb57bf7b", + "email": "jane.doe@company.com", + "full_name": "jane.doe@company.com", + "identity_type": "EMAIL_PASSWORD", + "subject": "e6132914-c9bf-4bea-854a-7520bb57bf7b" + }, + "group_ids": [ + "iam.group:w7NkwCwBmdWH88f8mQqR" + ] + } + }, + "authorization_info": [ + { + "permissions": [ + "resource-manager.projects.update" + ], + "organization_id": "067acbc1-ed49-4dc2-9fcb-6b4aff713469", + "project_id": "c666065a-b21e-489c-8045-a79d3802fb64" + } + ], + "operation_info": { + "operation_id": "937d1354-6fc2-4cbf-8f94-03a1b82bcd8d" + }, + "description": "Deleted project", + "action": "DELETE", + "status_code": "OK" +} +``` + + + +### HCP Boundary event payload example + +Sign in attempts to the Boundary Admin UI through HCP generate an audit log that contains the following information. 
+ + + +```json +{ + "cluster_id": "boundary-cluster-test", + "data": { + "auth": { + "auth_token_id": "", + "email": "[REDACTED]", + "grants_info": {}, + "name": "[REDACTED]", + "user_info": { + "id": "u_recovery" + } + }, + "id": "e_LM2Og3ZWhe", + "request": { + "details": { + "recursive": true, + "scope_id": "global" + } + }, + "request_info": { + "client_ip": "10.10.0.222", + "id": "gtraceid_6daQ2ZnwHEZwYNtAqmfW", + "method": "GET", + "path": "/v1/sessions?recursive=true&scope_id=global" + }, + "response": { + "details": {}, + "status_code": 200 + }, + "timestamp": "2024-01-18T19:48:28.819219731Z", + "type": "APIRequest", + "version": "v0.1" + }, + "datacontentype": "application/cloudevents", + "hcp_product": "boundary", + "id": "7YdpvNxqFn", + "organization_id": "067acbc1-ed49-4dc2-9fcb-6b4aff713469", + "project_id": "98a0dcc3-5473-4e4d-a28e-6c343c498530", + "serialized": "eyJpZCI6IjdZZHB2TnhxRm4iLCJzb3VyY2UiOiJodHRwczovL2hhc2hpY29ycC5jb20vYm91bmRhcnkvMGM3Nzg2MzEzYjE5L2NvbnRyb2xsZXIiLCJzcGVjdmVyc2lvbiI6IjEuMCIsInR5cGUiOiJhdWRpdCIsImRhdGEiOnsiaWQiOiJlX0xNMk9nM1pXaGUiLCJ2ZXJzaW9uIjoidjAuMSIsInR5cGUiOiJBUElSZXF1ZXN0IiwidGltZXN0YW1wIjoiMjAyNC0wMS0xOFQxOTo0ODoyOC44MTkyMTk3MzFaIiwicmVxdWVzdF9pbmZvIjp7ImlkIjoiZ3RyYWNlaWRfNmRhUTJabndIRVp3WU50QXFtZlciLCJtZXRob2QiOiJHRVQiLCJwYXRoIjoiL3YxL3Nlc3Npb25zP3JlY3Vyc2l2ZT10cnVlXHUwMDI2c2NvcGVfaWQ9Z2xvYmFsIiwiY2xpZW50X2lwIjoiMTAuMTAuMC4yMjIifSwiYXV0aCI6eyJhdXRoX3Rva2VuX2lkIjoiIiwidXNlcl9pbmZvIjp7ImlkIjoidV9yZWNvdmVyeSJ9LCJncmFudHNfaW5mbyI6e30sImVtYWlsIjoiW1JFREFDVEVEXSIsIm5hbWUiOiJbUkVEQUNURURdIn0sInJlcXVlc3QiOnsiZGV0YWlscyI6eyJzY29wZV9pZCI6Imdsb2JhbCIsInJlY3Vyc2l2ZSI6dHJ1ZX19LCJyZXNwb25zZSI6eyJzdGF0dXNfY29kZSI6MjAwLCJkZXRhaWxzIjp7fX19LCJkYXRhY29udGVudHlwZSI6ImFwcGxpY2F0aW9uL2Nsb3VkZXZlbnRzIiwidGltZSI6IjIwMjQtMDEtMThUMTk6NDg6MjguODE5MjM1NDY0WiJ9Cg", + "serialized_hmac": "hmac-sha256:u2pUHrsbNO2X6cs6PRwhdzgyyF0xUW8FIv8PbNG1E-c", + "source": "https://hashicorp.com/boundary/0c7786313b19/controller", + "specversion": "1.0", + "time": 
"2024-01-18T19:48:28.819235464Z", + "type": "audit" +} +``` + + + +### HCP Vault Radar event payload example + +Creating a subscription filter for HCP Vault Radar generates an audit log that contains the following information. + + + +```json +{ + "id": "42b9b6f3-e87e-4855-88f9-e0ddc2a12db7", + "timestamp": "2025-05-02T20:21:02.423Z", + "stream": { + "organization_id": "022910a1-e843-40d0-b754-b471480cdd5a", + "project_id": "b38f0dbb-f921-4913-abcf-bc68f67e72d3", + "topic": "hashicorp.platform.audit" + }, + "control_plane_event": { + "request_info": { + "http_verb": "POST", + "http_path": "/2023-05-01/vault-radar/projects/b38f0dbb-f921-4913-abcf-bc68f67e72d3/api/integrations/subscriptions/8e6f2daa-505c-47b9-8c18-cbcbfba80d35/filters/294a02bd-6ba5-45e6-aa0a-4a369f5eef57", + "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36", + "http_client_ip": "70.187.230.239" + }, + "authentication_info": { + "principal": { + "id": "[REDACTED]", + "type": "PRINCIPAL_TYPE_USER", + "user": { + "id": "[REDACTED]", + "email": "john.doe@hashicorp.com", + "full_name": "john.doe@hashicorp.com", + "identity_type": "EMAIL_PASSWORD", + "identity_types": [], + "subject": "[REDACTED]", + "scim_synchronized": false + }, + "group_ids": [] + }, + "service_principal_delegation_chain": [] + }, + "authorization_info": [ + { + "permissions": [ + "vault-radar.integrations.create" + ], + "organization_id": "022910a1-e843-40d0-b754-b471480cdd5a", + "project_id": "b38f0dbb-f921-4913-abcf-bc68f67e72d3", + "resource_id": "b38f0dbb-f921-4913-abcf-bc68f67e72d3" + } + ], + "metadata": { + "action_success": true, + "correlation_id": "d55d4c67-4fcd-4ce9-b7ea-1f5e286df2d2", + "service_name": "Vault Radar" + }, + "operation_info": { + "operation_id": "" + }, + "description": "Radar - Create Subscription Filter", + "action": "CREATE", + "status_code": "OK" + } +} +``` + + \ No newline at end of file diff --git 
a/content/hcp-docs/content/docs/hcp/create-account.mdx b/content/hcp-docs/content/docs/hcp/create-account.mdx new file mode 100644 index 0000000000..6bcb6dc560 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/create-account.mdx @@ -0,0 +1,42 @@ +--- +page_title: HCP Account +description: |- + Create a HashiCorp Cloud Platform account and manage your account settings. +--- + +# HCP Account + +This page explains how to create an account in HashiCorp Cloud Platform (HCP) and manage your account settings. + +## Account geography + +To meet data residency requirements, HCP requires separate accounts for the global and European geographies. + +To create a global HCP account, sign up on [the HCP portal](https://portal.cloud.hashicorp.com/). To create an HCP Europe account, sign up on [the HCP Europe portal](https://portal.cloud.eu.hashicorp.com/). + +For more information, refer to [HCP Europe](/hcp/docs/hcp/europe). + +## Create an HCP Account + +You can create an HCP account using one of the following methods: + +- Email and password +- [Single Sign-On](/hcp/docs/hcp/iam/sso) through GitHub or an identity provider configured for an [HCP organization](/hcp/docs/hcp/admin/orgs) + +You can also use your HCP credentials to sign in to the following HashiCorp products and educational resources: + +- [HCP Terraform](https://cloud.hashicorp.com/products/terraform): Refer to the [HCP Terraform documentation](/terraform/cloud-docs/users-teams-organizations/users#log-in-with-your-hcp-account) to learn how to log in with your HCP credentials.
+- [Tutorials](/hcp/tutorials): Step-by-step tutorials to learn HCP and other HashiCorp products +- [Discuss](https://discuss.hashicorp.com): Discussion forums where you can ask questions and read product announcements +- [HashiConf virtual events](https://hashiconf.com): Resources for HashiCorp's bi-annual conference + +## Account Settings + +You can review your account settings, change your password, manage email preferences, and enable multi-factor authentication (MFA) from your account settings screen. +Choose **Account settings** from your user profile menu to review basic information about your account. + +Click **Security** in the sidebar to access additional features. You can perform the following actions on the **Security** screen: + +- Click **Send password reset email** to initiate the process of changing your password. Follow the instructions in the email to proceed. + +- Click **Enable MFA** to begin setting up MFA. Refer to the [Multi-factor Authentication](/hcp/docs/hcp/security/mfa) documentation for next steps. diff --git a/content/hcp-docs/content/docs/hcp/europe.mdx b/content/hcp-docs/content/docs/hcp/europe.mdx new file mode 100644 index 0000000000..d0f6150bfd --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/europe.mdx @@ -0,0 +1,86 @@ +--- +page_title: HCP Europe +description: |- + HashiCorp Cloud Platform (HCP) has a dedicated portal for businesses in Europe to meet data residency requirements. Learn about the HCP services available in Europe. +--- + +# HCP Europe + +With HCP Europe, your resources are hosted, managed, and billed separately to meet European data residency requirements. + +For more information about using HCP Terraform in the European region, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe).
+ +## HCP Europe portal + +[The HCP Europe portal](https://portal.cloud.eu.hashicorp.com/) is available at a distinct URL: + +```plaintext +https://portal.cloud.eu.hashicorp.com/ +``` + +Use the portal to sign in to your HCP Europe account. Then you can access and manage your HCP Europe resources from a compatible web browser. + +### Change HCP geography + +When you visit [the HCP Portal](https://portal.cloud.eu.hashicorp.com), you have the option to change your geography before you sign in to your account. + +Under **Change geography**, use the selector to switch between the global and Europe HCP portals. + +## Product availability + +The following table lists the current availability of HCP services in each region. + +| Service | Global availability | Europe availability | +| :-------------- | :-----------------: | :-----------------: | +| Boundary | ✅ | ❌ | +| Packer | ✅ | ❌ | +| Terraform | ✅ | ✅ | +| Vagrant | ✅ | ❌ | +| Vault Dedicated | ✅ | ❌ | +| Vault Radar | ✅ | ❌ | +| Waypoint | ✅ | ✅ | + +We plan to add support for more services to HCP Europe over time, until all services are available in both regions. + +## Benefits + +HCP Europe provides the following benefits for managing your cloud deployments: + +- **Isolated HCP footprints**: A unique HCP footprint specific to the European geography ensures complete isolation from the existing global HCP footprint. +- **Data residency**: User-generated data remains within European borders so that you can ensure compliance with data sovereignty regulations and reduce potential risks associated with cross-border data transfers. +- **Enhanced user control**: Organizations have granular control over user accounts within Europe to manage access, permissions, and account activities more precisely. + +HCP Europe provides the same reliability and uninterrupted access to data as the global HCP platform. To provide 24-hour support, data from HCP Europe may be accessed by members of our support team in other regions.
+ +To learn more about our data governance policies, visit the [HashiCorp EU Trust Center](https://www.hashicorp.com/en/trust/eu). + +## Accounts + +You must create a new HCP account and organization to use with HCP Europe. You can use the same email address to sign up for your HCP Europe account that you used for your global account, but you cannot connect or migrate existing HCP accounts from the global geography to HCP Europe. + +European accounts combine HCP and HCP Terraform (formerly Terraform Cloud) administration by default. When you create groups or invite users in HCP Terraform, the HCP platform automatically manages these resources. Similarly, when you change permissions in the HCP platform, it also affects HCP Terraform accounts. + +## Billing + +HCP Europe bills resources separately from the global HCP resources, even if you already have an existing payment method set up for the global HCP platform. + +HCP Europe supports the following types of accounts: + +- Trial accounts +- Contract plans + +HCP Europe does not support pay-as-you-go billing at this time. + +## Constraints and limitations + +HCP Europe has the following constraints: + +- You can only delete an organization through HCP Support. +- Each HCP Europe organization is limited to 100 projects. +- You must delete all workspaces within a project before you can delete a project. +- You cannot use European HCP service principals with the HCP Terraform APIs. +- HCP Europe does not support the HCP CLI at this time. +- HCP Europe organizations do not support team notifications for groups. +- If you remove and then re-add a user to your organization, it may cause unexpected permissions behavior. Contact HCP Support for more information. + +To learn more about HCP Terraform specific limitations, refer to [Use HCP Terraform in Europe](/terraform/cloud-docs/europe#constraints-and-limitations). 
diff --git a/content/hcp-docs/content/docs/hcp/iam/access-management.mdx b/content/hcp-docs/content/docs/hcp/iam/access-management.mdx new file mode 100644 index 0000000000..f1edd9e209 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/access-management.mdx @@ -0,0 +1,283 @@ +--- +page_title: Access Management +description: |- + This topic describes how access management functions within the HashiCorp Cloud Platform (HCP) and how to leverage it. +--- + +# Access Management + +This topic describes HCP's access management features. You can set roles and permissions at the _organization level_, _project level_, or _resource level_ to secure access to HCP resources. + +## Roles & Permissions + +@include '/hcp-administration/permission-intro.mdx' + +### Organization + +The following tables describe role permissions assigned at the organization level. + + + + +| HCP Organization Permissions | Owner | Admin | Contributor | Viewer | Browser | No role | +| --------------------------------- | :-----: | :------: | :---------: | :------: | :------: | :------: | +| Add and delete users | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Manage user permissions | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| View users | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| View groups | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | +| Manage service principals | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Manage groups | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| View current billing status | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Create projects | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | +| View projects | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | +| View project resources | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | +| Request organization deletion | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | +| Manage SSO configuration | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | +| Manage billing resources | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | + + + + + +| HCP organization permissions | Organization IAM policies administrator | Project Creator | +| --------------------------------- | :-------------------------------------: | :-------------: | +| Add and delete users | ❌ | ❌ | +| Manage user permissions | ✅ | ❌ | +|
View users | ✅ | ✅ | +| View groups | ✅ | ✅ | +| Manage service principals | ❌ | ❌ | +| Manage groups | ❌ | ❌ | +| View current billing status | ❌ | ❌ | +| Create projects | ❌ | ✅ | +| View projects | ✅ | ❌ | +| View project resources | ❌ | ❌ | +| Request organization deletion | ❌ | ❌ | +| Manage SSO and SCIM configuration | ❌ | ❌ | +| Manage billing resources | ❌ | ❌ | + + + + + +| HCP Organization permissions | Billing administrator | +| --------------------------------- | :-------------------: | +| Add and delete users | ❌ | +| Manage user permissions | ❌ | +| View users | ✅ | +| View groups | ✅ | +| Manage service principals | ❌ | +| Manage groups | ❌ | +| View current billing status | ✅ | +| Create projects | ❌ | +| View projects | ✅ | +| View project resources | ❌ | +| Request organization deletion | ❌ | +| Manage SSO and SCIM configuration | ❌ | +| Manage billing resources | ✅ | + + + + + + +| HCP organization permissions | Group administrator | SSO administrator | +| --------------------------------- | :-----------------: | :---------------: | +| Add and delete users | ❌ | ❌ | +| Manage user permissions | ❌ | ❌ | +| View users | ✅ | ✅ | +| View groups | ✅ | ✅ | +| Manage service principals | ❌ | ❌ | +| Manage groups | ✅ | ❌ | +| View current billing status | ❌ | ❌ | +| Create projects | ❌ | ❌ | +| View projects | ❌ | ❌ | +| View project resources | ❌ | ❌ | +| Request organization deletion | ❌ | ❌ | +| Manage SSO and SCIM configuration | ❌ | ✅ | +| Manage billing resources | ❌ | ❌ | + + + + + +| HCP Terraform organization permission | Admin | Contributor | Viewer | +|---------------------------------------|:-----:|:-----------:|:------:| +| Owner-level permissions | ✅ | ❌ | ❌ | +| View all projects | ✅ | ✅ | ✅ | +| Manage all projects | ✅ | ✅ | ❌ | +| View all workspaces | ✅ | ✅ | ✅ | +| Manage all workspaces | ✅ | ✅ | ❌ | +| Manage organization access | ✅ | ❌ | ❌ | +| Include secret groups | ✅ | ❌ | ❌ | +| Manage policies | ✅ | ❌ | ❌ | +| Manage policy 
overrides | ✅ | ❌ | ❌ | +| Manage run tasks | ✅ | ❌ | ❌ | +| Manage version control settings | ✅ | ❌ | ❌ | +| Manage agent pools | ✅ | ❌ | ❌ | +| Manage private registry modules | ✅ | ❌ | ❌ | +| Manage private registry providers | ✅ | ❌ | ❌ | +| Manage public registry modules | ✅ | ❌ | ❌ | +| Manage public registry providers | ✅ | ❌ | ❌ | +| Members can manage API tokens | ✅ | ✅ | ❌ | + +To learn more about each permission, refer to [HCP Terraform organization permissions](/terraform/cloud-docs/users-teams-organizations/permissions/organization). + + + + + +A user can be part of an organization with no roles assigned directly to them, through the [SSO default role settings](/hcp/docs/hcp/admin/iam/sso) or IAM settings. To enforce least-privileged access, new users have a limited experience within the platform until an Admin assigns either an organization or project role to the user. + +### Project + +The following tables describe role permissions scoped to the project level. + + + + +| HCP project permissions | Owner | Admin | Contributor | Viewer | Browser | +| ----------------------------------- | :-----: | :-----: | :---------: | :------: | :------: | +| View project | ✅ | ✅ | ✅ | ✅ | ✅ | +| View project resources | ✅ | ✅ | ✅ | ✅ | ❌ | +| Edit project permissions | ✅ | ✅ | ❌ | ❌ | ❌ | +| Delete project | ✅ | ✅ | ❌ | ❌ | ❌ | +| Create and delete project resources | ✅ | ✅ | ✅ | ❌ | ❌ | +| Manage project service principals | ✅ | ✅ | ❌ | ❌ | ❌ | + + + + + +| HCP project permissions | App manager | App secrets reader | Integration manager | Integration reader | +| ------------------------------------------------- | :---------: | :----------------: | :-----------------: | :----------------: | +| View project | ✅ | ✅ | ❌ | ❌ | +| View project resources | ✅ | ✅ | ❌ | ❌ | +| Edit project permissions | ❌ | ❌ | ❌ | ❌ | +| Delete project | ❌ | ❌ | ❌ | ❌ | +| Create and delete project Vault Secrets resources | ✅ | ❌ | ❌ | ❌ | +| Manage project service principals | ❌ | ❌
| ❌ | ❌ | +| Create sync integrations | ❌ | ❌ | ✅ | ❌ | +| Manage sync integrations | ❌ | ❌ | ✅ | ❌ | +| Delete sync integrations | ❌ | ❌ | ✅ | ❌ | +| Connect sync integrations | ❌ | ❌ | ✅ | ❌ | +| Disconnect sync integrations | ❌ | ❌ | ✅ | ❌ | +| Get integrations | ❌ | ❌ | ✅ | ✅ | +| List integrations | ❌ | ❌ | ✅ | ✅ | +| Create integrations | ✅ | ❌ | ✅ | ❌ | +| Update integrations | ✅ | ❌ | ✅ | ❌ | +| Delete integrations | ✅ | ❌ | ✅ | ❌ | +| Read rotating secrets | ✅ | ✅ | ✅ | ❌ | +| Create rotating secrets | ❌ | ❌ | ✅ | ❌ | +| Edit rotating secrets | ✅ | ❌ | ✅ | ❌ | +| Delete rotating secrets | ✅ | ❌ | ✅ | ❌ | +| Generate dynamic secrets credentials | ✅ | ✅ | ✅ | ❌ | +| Create dynamic secrets | ❌ | ❌ | ✅ | ❌ | +| Edit dynamic secrets | ❌ | ❌ | ✅ | ❌ | +| Delete dynamic secrets | ✅ | ❌ | ✅ | ❌ | + +From the IAM tab in the HCP UI, you can assign the App Manager, App Secrets Reader, Integration Manager, or Integration Reader roles to Users, Service Principals, and Groups at the Project level. + +Refer to the [HCP Vault Secrets](/hcp/docs/vault-secrets/permissions) +documentation for more details. 
+ + + + +| HCP project permissions | Project IAM policies administrator | +| ----------------------------------- | :--------------------------------: | +| View project | ✅ | +| View project resources | ❌ | +| Edit project permissions | ✅ | +| Delete project | ❌ | +| Create and delete project resources | ❌ | +| Manage project service principals | ❌ | +| Manage group role for project | ✅ | + + + + + + +| HCP Terraform permission name | Admin role | Contributor role | Viewer | +|--------------------------------|:----------:|:---------------:|:------:| +| Read project | ✅ | ✅ | ✅ | +| Update project | ✅ | ❌ | ❌ | +| Delete project | ✅ | ❌ | ❌ | +| Create workspaces | ✅ | ❌ | ❌ | +| Move workspaces | ✅ | ❌ | ❌ | +| Delete workspaces | ✅ | ❌ | ❌ | +| Manage teams | ✅ | ❌ | ❌ | +| Apply runs | ✅ | ❌ | ❌ | +| Plan runs | ✅ | ❌ | ❌ | +| Read runs | ✅ | ❌ | ✅ | +| Read and write variables | ✅ | ❌ | ❌ | +| Read variables | ✅ | ❌ | ✅ | +| Manage variable sets | ✅ | ❌ | ❌ | +| Read variable sets | ✅ | ❌ | ❌ | +| Read and write state | ✅ | ❌ | ❌ | +| Read state | ✅ | ❌ | ✅ | +| Download Sentinel mocks | ✅ | ❌ | ❌ | +| Lock/unlock workspaces | ✅ | ❌ | ❌ | +| Manage workspace Run Tasks | ✅ | ❌ | ❌ | + +To learn more about each permission, refer to [HCP Terraform project permissions](/terraform/cloud-docs/users-teams-organizations/permissions/project). + + + + + +#### Assign a project role + +@include '/hcp-administration/assign-project-role.mdx' + +## Role Names and Role IDs + +To interact with the HCP Access Management system using the [HCP Terraform provider](https://registry.terraform.io/providers/hashicorp/hcp/latest) or public APIs, you must properly format the role IDs you reference. The following tables list role names and the format of their role IDs.
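A role ID in this format is what the provider's IAM resources expect. As an illustrative sketch (the project and service principal names below are placeholders), granting the Viewer role on a project with `hcp_project_iam_binding` looks like:

```hcl
resource "hcp_project" "example" {
  name = "example-project"
}

resource "hcp_service_principal" "example" {
  name   = "example-sp"
  parent = hcp_project.example.resource_name
}

# The role argument takes a formatted role ID, such as "roles/viewer".
resource "hcp_project_iam_binding" "example" {
  project_id   = hcp_project.example.resource_id
  principal_id = hcp_service_principal.example.resource_id
  role         = "roles/viewer"
}
```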
+ + + + +| Role name | Role ID | +| ----------- | :------------------------------: | +| Admin | `roles/admin` | +| Contributor | `roles/contributor` | +| Viewer | `roles/viewer` | +| Browser | `roles/resource-manager.browser` | + + + + + +@include '/hcp-administration/role-name-and-id/vault-secrets.mdx' + + + + + +| Role name | Role ID | +| --------------------------------------- | :-------------------------------------------------: | +| Project IAM policies administrator | `roles/resource-manager.project-iam-policies-admin` | +| Organization IAM policies administrator | `roles/resource-manager.org-iam-policies-admin` | +| Project Creator | `roles/resource-manager.project-creator` | + + + + + +| Role Name | Role ID | +| --------------------- | :---------------------------: | +| Billing Administrator | `roles/billing.billing-admin` | + + + + + +| Role Name | Role ID | +| ------------------- | :---------------------: | +| Group Administrator | `roles/iam.group-admin` | +| SSO Administrator | `roles/iam.sso-admin` | + + + + diff --git a/content/hcp-docs/content/docs/hcp/iam/groups.mdx b/content/hcp-docs/content/docs/hcp/iam/groups.mdx new file mode 100644 index 0000000000..7316bb5135 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/groups.mdx @@ -0,0 +1,95 @@ +--- +page_title: Groups +description: |- + This topic describes how to manage HashiCorp Cloud Platform (HCP) users within groups. +--- + +# Groups + +This topic describes how to create and manage groups of users in HashiCorp Cloud Platform (HCP). A group is a set of one or more user identities that you want to manage as a single identity. + +## Introduction + +In HCP, groups enable you to manage permissions for multiple users in a consistent manner. You can assign groups to roles and associate them with one or more projects, just as you would for individual user identities. This approach enables you to logically manage users and permissions at scale. 
+ +Each group can have a different role for each project it is associated with. The following example illustrates this capability: + +1. An organization has one user assigned the `Admin` IAM role and three users assigned the `Viewer` role. +1. The admin creates a group named `engineers` and adds the viewers as members. +1. The admin assigns the `engineers` group to three projects in the HCP organization with the following roles: + - `Admin` role in the _Development_ project + - `Contributor` role in the _Staging_ project + - `Viewer` role in the _Production_ project + +As a result, members of the `engineers` group would be able to perform administrative actions in the _Development_ project, create and modify resources in the _Staging_ project, and continue to read resources in the _Production_ project. + +![Development Project config for Group Role](/img/docs/hcp-core/ui-project-groups-development.png) +![Staging Project config for Group Role](/img/docs/hcp-core/ui-project-groups-staging.png) +![Production Project config for Group Role](/img/docs/hcp-core/ui-project-groups-production.png) + +To learn more about user permissions, refer to [User permissions](/hcp/docs/hcp/iam/users#user-permissions). + +## Requirements + +You must have [admin permissions for your organization](/hcp/docs/hcp/iam/users#user-permissions) to create and manage groups and users. + +## Create a group + +1. Log into [the HCP portal](https://portal.cloud.hashicorp.com) and choose your organization. +1. Click **Access control (IAM)** to view a list of users. +1. Click **Groups**. +1. Click **Create group**. +1. Enter a group name and description. Group names must be unique across the entire organization. +1. Click **Create group**. +1. Click **Add group members** to add users to the group. +1. Choose users to add to the group and click **Add group members**.
If a user you expect to see does not appear in the list, verify that the user has joined the HCP organization. + +## Assign group roles for members + +Once members have been added to your group, you can manage their group role. Within a group, each role has certain permissions that determine who can view and manage the group. + +### Group role permissions + +The following table describes role permissions for each group role. + + +| Group Permissions | Member | Manager | +| ----------------------------- | :------: | :------: | +| View group | ✅ | ✅ | +| Manage group details | ❌ | ✅ | +| Manage group membership | ❌ | ✅ | +| Manage group roles | ❌ | ✅ | + + + +All members with more permissive [organization roles](/hcp/docs/hcp/iam/users#user-permissions) are inherently group managers, regardless of their assigned group roles. + + + +### Edit group roles + +1. Log into [HCP Portal](https://portal.cloud.hashicorp.com/) and choose your organization. +1. Click **Access Control (IAM)**. +1. Click **Groups**. +1. Click the group whose member roles you want to edit. +1. Under members, choose the user whose role you want to edit and click **Edit role** from the drop-down. +![UI HCP PORTAL EDIT MEMBER ROLE](/img/docs/hcp-core/ui-hcp-portal-group-members-edit-role.png) +1. Choose the group role for that user and click **Save**. + +## Assign a project and role + +1. From [the HCP portal](https://portal.cloud.hashicorp.com) choose the project you want to assign a group to. +1. From the project dashboard, click **Access control (IAM)**. +1. Click **Add new assignment**. +1. Search for and then select the groups you want to assign projects and roles to. +1. Select a service and role to set the group's permissions for the project. +1. Click **Save**. + +## Role precedence + +When a user is a member of a group whose permissions conflict with the user's permissions in the organization, HCP enforces the most elevated role assigned to the user.
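The precedence rule reduces to taking the most privileged role among everything assigned to the user. A minimal sketch, assuming a hypothetical least-to-most-privileged ranking for illustration (this is not HCP's internal model):

```python
# Hypothetical ranking, least to most privileged, for illustration only.
RANK = {"browser": 0, "viewer": 1, "contributor": 2, "admin": 3, "owner": 4}

def effective_role(org_role, group_project_roles):
    """Return the most elevated role among the user's organization role
    and the project roles granted through group membership."""
    return max([org_role] + group_project_roles, key=RANK.__getitem__)
```

For instance, `effective_role("admin", ["viewer"])` yields `"admin"`, while `effective_role("viewer", ["contributor"])` yields `"contributor"`.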
For example, a user assigned the admin role for an organization is an admin for all projects, regardless of the project roles assigned to the groups they are members of. + +When a user is in multiple groups with different roles for a project, HCP enforces the highest role. + +When a user is not a member of any group, HCP enforces their organization role for the +project. diff --git a/content/hcp-docs/content/docs/hcp/iam/mfa.mdx b/content/hcp-docs/content/docs/hcp/iam/mfa.mdx new file mode 100644 index 0000000000..2749d1ab1b --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/mfa.mdx @@ -0,0 +1,69 @@ +--- +page_title: Multi-Factor Authentication +description: |- + Introduction to Multi-Factor Authentication for HCP. +--- + +# Introduction to Multi-Factor Authentication + +HashiCorp Cloud Platform (HCP) allows users to sign in using several different methods, including email-based sign-in. Other sign-in methods include GitHub-based sign-in and [Single Sign-On through Okta](/hcp/docs/hcp/iam/sso). To help secure your account and your company's data, HCP offers **Multi-Factor Authentication (MFA)** with the email-based sign-in method. This optional feature is also commonly known as two-factor authentication or 2FA. + +-> **Note:** The MFA option is currently not offered with GitHub-based or SSO sign-in methods. + +With MFA, you need a password (credential) and an authenticator application downloaded to your phone. HCP offers one method of MFA called **one-time password (OTP)**. An OTP is a sequence of numbers generated by an authenticator application. If you lose your device, you can use the recovery code provided during the setup process. Make sure to record the recovery code and save it to a secure location. + +~> Note that all changes made to MFA will affect your access to HashiCorp Learn, Discuss, Events, and Certificates sites since the same account is used to access those sites.
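For background on what the authenticator application is doing: codes of this kind are typically produced with the time-based one-time password (TOTP) algorithm standardized in RFC 6238, which applies HMAC-SHA1 to a shared secret and the current 30-second time window. The sketch below illustrates the standard algorithm only; it is not HCP's implementation.

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian 8-byte counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP keyed on the current time window."""
    return hotp(secret, int(time.time()) // step, digits)
```

The authenticator app and the server derive the same 6-digit code from the secret exchanged through the QR code, so entering the code proves possession of the enrolled device.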
+ +## Enabling MFA + +To enable MFA, open the drop-down menu in the top right corner of the HCP portal, where your user profile photo is located, and select **Account Settings**. + +![Account Settings](/img/docs/account-settings.png 'Account Settings') + +In **Account settings**, select the **Security** tab. + +![Security Tab](/img/docs/security-tab.png 'Security Tab') + +If you did not initially enroll in MFA when you created your HCP account, the **Status** displays **Not enabled**. Click **Enable MFA** to begin the MFA setup process. The setup process may take up to 10 minutes to complete. + +After you click **Enable MFA**, the following screen appears. + +![Enable MFA](/img/docs/updated-enable-mfa.png 'Enable MFA') + +Click **Continue**. The MFA setup process takes you out of the HCP portal and back to the sign-in screen widget. Allow some time for the page to process and reload before it takes you back into [the HCP Portal](https://portal.cloud.hashicorp.com/). Once you are authenticated back into the portal, the One-Time Password screen appears. You can use Google Authenticator or a similar authenticator application to scan the QR code. Once your authenticator application generates the code, enter the 6-digit code to move on to the next step. If you do not have a device on hand to scan the QR code, you can click the text code link, which automatically copies a string for you. Manually enter the string code into your authenticator application and click **Verify**. + +![One-Time Password](/img/docs/otp.png 'One-Time Password') + +After you successfully verify the code, a recovery code is provided; record the recovery code and store it in a secure location. A recovery code provides a method to authenticate back into [the HCP Portal](https://portal.cloud.hashicorp.com/) if you do not have access to a device.
![Recovery Code](/img/docs/recovery-code.png 'Recovery Code') + +Once you have secured your recovery code and finalized the MFA setup process, the Status changes to **MFA enabled**, confirming that you have successfully enabled MFA. + +## Disabling MFA + +Disabling MFA requires that you have your OTP or recovery code on hand. Alternatively, if you have an active HCP session and you enabled **Remember this browser**, you can also disable MFA. If you do not have any of these methods in place, you cannot perform the manual steps described below to remove MFA, in which case you must contact support to perform a hard reset to remove MFA. + +To disable MFA, select the **Disable MFA** link under the MFA status in the **Multi-factor Authentication (MFA)** section. + +![Disable MFA](/img/docs/disableMFA.png 'Disable MFA') + +You will be required to log back in using your email-based login credentials. From there, you will be prompted to enter your 6-digit OTP code from your authentication app. + +![OTP](/img/docs/documentation.png 'OTP') + +Once you have entered your 6-digit OTP code, the Disable MFA page opens, where you must manually enter the word **DISABLE** to confirm that you want to remove MFA from your account. Allow some time for the page to process and reload before it takes you back into [the HCP Portal](https://portal.cloud.hashicorp.com/). + +![Disable MFA](/img/docs/updated-disable-mfa.png 'Disable MFA') + +~> Note that all changes made to MFA will affect your access to HashiCorp Learn, Discuss, Events, and Certificates sites since the same account is used to access those sites. + +## Troubleshooting + +If you run into issues where you cannot verify your 6-digit code, it's likely that the system is temporarily down. + +![QR Error Code](/img/docs/qr-code-error.png 'QR Error code') + +To mitigate this issue, try generating a new OTP.
If you misplaced your device, are unable to access it, or need to sign in without one, use the recovery code that you securely saved to sign back into [the HCP Portal](https://portal.cloud.hashicorp.com/). Note that there may be a five-minute time-out period if you do not complete the MFA setup process within the given timeframe. + +If errors persist or you have lost your recovery code, please contact [Support](https://support.hashicorp.com/hc/en-us) for further assistance. diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/index.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/index.mdx new file mode 100644 index 0000000000..af985294d3 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/index.mdx @@ -0,0 +1,253 @@ +--- +page_title: Service principals +description: |- + Service principals are identities used for authenticating service requests from applications, hosted services, and automated tools. +--- + +# Service principals + +This topic describes the steps to use service principals to authenticate service requests from applications, hosted services, and automated tools on the HashiCorp Cloud Platform (HCP). + +Service principals can only be associated with one organization, and you can assign role-based permissions to service principals so that they can perform specific actions in HCP. Refer to [user permissions](/hcp/docs/hcp/iam/users#user-permissions) for details. + +## Types of service principals + +Create service principals as _organization-level_ or _project-level_. This section gives an overview of both types and explains how to create and delete each type. + +To use service principals, you must also create a corresponding service principal key. Refer to [service principal keys](/hcp/docs/hcp/iam/service-principal/key) for more information.
+ +### Organization-level service principals + +_Organization-level service principals_ are scoped to interact with every resource and project within an organization. For example, an organization-level service principal with [viewer permissions](/hcp/docs/hcp/iam/users#user-permissions) can view all resources across all projects within an organization. + +### Project-level service principals + +_Project-level_ service principals are designed to interact with resources within a specific project in an organization. By default, they can only access resources in the project where they were created. However, these service principals can be assigned roles in additional projects beyond their original scope. + +When a project-level service principal is assigned a role in another project, it can interact with the resources in that project according to the permissions granted by the assigned role. The service principal retains its default permissions in its original project while gaining the new permissions in the additional project. + +**Example:** + +A service principal created with viewer permissions in Project A can be assigned contributor permissions in Project B. In this scenario: +* The service principal will only have view access to the resources in Project A (its original project). +* It will have contributor access to the resources in Project B (the additional project). + +You must use project-level service principals when configuring [workload identity federation](/hcp/docs/hcp/iam/service-principal/workload-identity-federation). + +## Create a service principal + +Follow similar steps to create organization-level and project-level service principals. + +### Organization-level service principals + + + + +1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization. +1. Click **Access control (IAM)**. +1. Click **Service principals**. +1. Click **Create service principal**. +1. Enter a service principal name.
Then select the desired organization role for the service principal. +1. Click **Create service principal**. + + + + + +Use the [`hcp iam service-principals create` command](/hcp/docs/cli/commands/iam/service-principals/create) to create the service principal. + +The following example creates a service principal named `example-sp`. + +```shell-session +$ hcp iam service-principals create iam/organization/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp +``` + + + + + +Use the [`hcp_service_principal ` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/service_principal). + +```hcl +data "hcp_organization" "my_org" { +} + +resource "hcp_service_principal" "example" { + name = "workload-sp" + parent = data.hcp_organization.my_org.resource_name +} +``` + + + + +### Project-level service principals + + + + +1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization. +1. Click **Projects** and then select the project you want to create a service principal in. +1. Click **Access control (IAM)**. +1. Click **Service principals**. +1. Click **Create service principal**. +1. Enter a service principal name. Then select the desired scope and role for the service principal. +1. Click **Create service principal**. + + + + + +Use the [`hcp iam service-principals create` command](/hcp/docs/cli/commands/iam/service-principals/create) to create the service principal. + +The following example creates a project-level service principal named `example-sp`. + +```shell-session +$ hcp iam service-principals create iam/project/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp +``` + + + + + +Use the [`hcp_service_principal ` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/service_principal). 
+
+```hcl
+resource "hcp_project" "my_proj" {
+  name = "platform-dev"
+}
+
+resource "hcp_service_principal" "example" {
+  name   = "workload-sp"
+  parent = hcp_project.my_proj.resource_name
+}
+```
+
+
+
+
+
+#### Cross-project service principals
+
+
+
+1. Create a service principal in a project as shown above.
+1. Select the other project that you want to give the service principal access to.
+1. Click **Access control (IAM)**.
+1. Click **Add new assignment**.
+1. Search for the service principal by name or by ID in the search box. Searching by name shows a dropdown of all service principals from the organization and all projects within the organization that contain that name.
+1. Select the service principal from the dropdown list.
+1. Select the service role to assign to the service principal from the two dropdown lists: **Select service** and **Select role(s)**.
+1. Click **Save**.
+1. Verify that the service principal appears in the role assignments list with an icon that shows a **Cross-project** tag when you hover over it.
+
+
+
+
+
+Use the [`hcp_service_principal` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/service_principal).
+
+```hcl
+resource "hcp_project" "first_project" {
+  name = "example-first-project"
+}
+
+resource "hcp_service_principal" "sp" {
+  name   = "example-first-project-sp"
+  parent = hcp_project.first_project.resource_name
+}
+
+resource "hcp_project" "second_project" {
+  name = "example-second-project"
+}
+
+resource "hcp_project_iam_binding" "example" {
+  project_id   = hcp_project.second_project.resource_id
+  principal_id = hcp_service_principal.sp.resource_id
+  role         = "roles/contributor"
+}
+```
+
+
+
+
+## Delete a service principal
+
+Follow similar steps to delete organization-level and project-level service principals.
+
+Before you can delete a service principal, you must [delete all keys associated with it](/hcp/docs/hcp/iam/service-principal/key).
+
+### Organization-level service principals
+
+
+
+
+1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization.
+1. Click **Access control (IAM)**.
+1. Click **Service principals**.
+1. Click on the dropdown next to the specific service principal you want to delete.
+1. Click **Delete service principal**.
+1. Type `DELETE` in the prompted field and click **Delete**.
+
+
+
+
+
+Use the [`hcp iam service-principals delete` command](/hcp/docs/cli/commands/iam/service-principals/delete) to delete a service principal.
+
+The following example deletes a service principal named `example-sp`.
+
+```shell-session
+$ hcp iam service-principals delete iam/organization/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp
+```
+
+
+
+
+
+Use the [`terraform destroy` command](/terraform/cli/commands/destroy) with the `-target` flag.
+
+```shell-session
+$ terraform destroy -target "hcp_service_principal.example"
+```
+
+
+
+
+### Project-level service principals
+
+
+
+
+1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization.
+1. Click **Projects** and select the project that contains the service principal you want to delete.
+1. Click **Access control (IAM)**.
+1. Click **Service principals**.
+1. Click on the dropdown next to the specific service principal you want to delete.
+1. Click **Delete service principal**.
+1. Type `DELETE` in the prompted field and click **Delete**.
+
+
+
+
+
+Use the [`hcp iam service-principals delete` command](/hcp/docs/cli/commands/iam/service-principals/delete) to delete a service principal.
+
+The following example deletes a project-level service principal named `example-sp`.
+
+```shell-session
+$ hcp iam service-principals delete iam/project/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp
+```
+
+
+
+
+
+Use the [`terraform destroy` command](/terraform/cli/commands/destroy) with the `-target` flag.
+
+```shell-session
+$ terraform destroy -target "hcp_service_principal.example"
+```
+
+
+
+
diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/key.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/key.mdx
new file mode 100644
index 0000000000..31fc99848c
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/key.mdx
@@ -0,0 +1,195 @@
+---
+page_title: Service principal keys
+description: |-
+  Service principal keys are values attached to service principals that are used to authenticate with the HCP public API. Learn how to create and delete service principal keys.
+---
+
+# Service principal keys
+
+This page describes the steps to generate and delete service principal keys using the HCP UI, HCP CLI, or HCP Terraform provider. On HCP, [service principals](/hcp/docs/hcp/iam/service-principal) have associated authentication keys, each consisting of a Client ID and Client secret pair. The external client uses the key to authenticate with the HCP public API.
+
+A single service principal can have a maximum of two keys.
+
+## Generate a service principal key
+
+You can generate _organization-level_ and _project-level_ service principal keys. A service principal key must exist at the same level as the service principal it is attached to.
+
+### Organization-level keys
+
+
+
+
+1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization.
+1. Click **Access control (IAM)**.
+1. Click **Service principals**.
+1. Click the specific service principal to open the detailed view screen.
+1. Click **Keys**.
+1. Click **Generate key**.
+1. Copy the Client secret and save it to a secure location for later use.
+
+
+
+
+
+Use the [`hcp iam service-principals keys create` command](/hcp/docs/cli/commands/iam/service-principals/keys/create) to create the service principal key.
+
+The following example creates a service principal key for the `example-sp` service principal.
+
+```shell-session
+$ hcp iam service-principals keys create iam/organization/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp
+```
+
+
+
+
+
+Use the [`hcp_service_principal_key` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/service_principal_key).
+
+```hcl
+data "hcp_organization" "my_org" {
+}
+
+resource "hcp_service_principal" "example" {
+  name   = "example-sp"
+  parent = data.hcp_organization.my_org.resource_name
+}
+
+resource "hcp_service_principal_key" "key" {
+  service_principal = hcp_service_principal.example.resource_name
+}
+```
+
+
+
+
+### Project-level keys
+
+
+
+
+1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization.
+1. Click **Projects** and select the project that contains the service principal.
+1. Click **Access control (IAM)**.
+1. Click **Service principals**.
+1. Click the specific service principal to open the detailed view screen.
+1. Click **Keys**.
+1. Click **Generate key**.
+1. Copy the Client secret and save it to a secure location for later use.
+
+
+
+
+
+Use the [`hcp iam service-principals keys create` command](/hcp/docs/cli/commands/iam/service-principals/keys/create) to create the service principal key.
+
+The following example creates a key for a project-level service principal named `example-sp`.
+
+```shell-session
+$ hcp iam service-principals keys create iam/project/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp
+```
+
+
+
+
+
+Use the [`hcp_service_principal_key` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/service_principal_key).
+
+```hcl
+resource "hcp_project" "my_proj" {
+  name = "platform-dev"
+}
+
+resource "hcp_service_principal" "example" {
+  name   = "workload-sp"
+  parent = hcp_project.my_proj.resource_name
+}
+
+resource "hcp_service_principal_key" "key" {
+  service_principal = hcp_service_principal.example.resource_name
+}
+```
+
+
+
+
+## Delete a service principal key
+
+Follow similar steps to delete organization-level and project-level service principal keys.
+
+### Organization-level keys
+
+
+
+
+1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization.
+1. Click **Access control (IAM)**.
+1. Click **Service principals**.
+1. Click the specific service principal to open the detailed view screen.
+1. Click **Keys**.
+1. Click on the dropdown next to the specific key you want to delete.
+1. Click **Delete key**.
+1. Type `DELETE` in the prompted field and click **Delete**.
+
+
+
+
+
+Use the [`hcp iam service-principals keys delete` command](/hcp/docs/cli/commands/iam/service-principals/keys/delete) to delete a service principal key.
+
+The following example deletes a service principal key that is associated with the `example-sp` service principal.
+
+```shell-session
+$ hcp iam service-principals keys delete iam/organization/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp/key/3KgtSLWTSs
+```
+
+
+
+
+
+Use the [`terraform destroy` command](/terraform/cli/commands/destroy) with the `-target` flag.
+
+```shell-session
+$ terraform destroy -target "hcp_service_principal_key.key"
+```
+
+
+
+
+### Project-level keys
+
+
+
+
+1. Log into [the HCP portal](https://portal.cloud.hashicorp.com/) and choose your organization.
+1. Click **Projects** and select the project that contains the service principal.
+1. Click **Access control (IAM)**.
+1. Click **Service principals**.
+1. Click the specific service principal to open the detailed view screen.
+1. Click **Keys**.
+1. Click on the dropdown next to the specific key you want to delete.
+1. Click **Delete key**.
+1. Type `DELETE` in the prompted field and click **Delete**.
+
+
+
+
+
+Use the [`hcp iam service-principals keys delete` command](/hcp/docs/cli/commands/iam/service-principals/keys/delete) to delete a service principal key.
+
+The following example deletes a service principal key that is associated with the `example-sp` project-level service principal.
+
+```shell-session
+$ hcp iam service-principals keys delete iam/project/d3e891e7-35c5-4ffb-a192-567e02ec566f/service-principal/example-sp/key/3KgtSLWTSs
+```
+
+
+
+
+
+Use the [`terraform destroy` command](/terraform/cli/commands/destroy) with the `-target` flag.
+
+```shell-session
+$ terraform destroy -target "hcp_service_principal_key.key"
+```
+
+
+
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement.mdx
new file mode 100644
index 0000000000..e89089bb3a
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement.mdx
@@ -0,0 +1,179 @@
+---
+page_title: Conditional access statements
+description: |-
+  A conditional access statement is a boolean expression that asserts the identity's expected attributes. HCP uses conditional access statements to authenticate external workloads.
+---
+
+# Conditional access statements
+
+This page describes conditional access statements and how to format them. A conditional access statement is a boolean expression that determines the eligibility of the external credentials sent to HCP during [workload identity federation](/hcp/docs/hcp/iam/service-principal/workload-identity-federation). When HCP evaluates the conditional access statement as `true`, it accepts the credential.
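To make the accept-or-reject semantics concrete, the following Python sketch mimics how a single equality or regular-expression check could gate a credential. This is illustrative only, not HCP's evaluator; the claim names and the two operators shown are assumptions for demonstration.

```python
import re

# Sketch: gate a credential on one check against its identity claims.
# ILLUSTRATIVE ONLY -- this is not HCP's evaluation engine.
def evaluate(claims, selector, op, value):
    """Resolve a dotted selector against nested claims, then apply one operator."""
    current = claims
    for part in selector.split("."):
        if not isinstance(current, dict):
            return False
        current = current.get(part)
    if current is None:
        return False  # a missing claim never matches
    if op == "==":
        return current == value
    if op == "matches":
        return re.search(value, current) is not None
    raise ValueError("unsupported operator: " + op)

# Example claims from a hypothetical OIDC token.
claims = {"jwt_claims": {"sub": "repo:octo-org/octo-repo:environment:prod"}}

# Accept only tokens issued for the octo-org/octo-repo repository.
print(evaluate(claims, "jwt_claims.sub", "matches", "^repo:octo-org/octo-repo:"))
```

The important behavior to note is that an absent or malformed claim evaluates to `false`, so the credential is rejected rather than accepted by default.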
+
+Conditional access statements allow additional control over which credentials and workloads can receive an HCP access token. They help ensure that only the intended workloads are allowed to access HCP services.
+
+## Create an expression
+
+A single expression is a matching operator with a [_selector_](#selectors) and [_value_](#values). Expressions are written in plain text format, and they support boolean logic and parenthesization. In general, whitespace is ignored, except within literal strings.
+
+### Matching operators
+
+All matching operators use a selector or value to choose what data should be matched. The following reference provides supported expressions for matching selectors and values.
+
+```text
+// Equality & Inequality checks
+<Selector> == "<Value>"
+<Selector> != "<Value>"
+
+// Emptiness checks
+<Selector> is empty
+<Selector> is not empty
+
+// Contains checks or Substring Matching
+"<Value>" in <Selector>
+"<Value>" not in <Selector>
+<Selector> contains "<Value>"
+<Selector> not contains "<Value>"
+
+// Regular Expression Matching
+<Selector> matches "<Regular Expression>"
+<Selector> not matches "<Regular Expression>"
+```
+
+## Selectors
+
+Selectors identify the data that an expression inspects. Define a selector using dot notation (`name.name.name`). Each name must start with an ASCII letter, and it can contain ASCII letters, numbers, and underscores.
+
+When part of the selector references a map value, you can use the form `["<map key name>"]` instead of `.<map key name>`. This syntax allows the possibility of using map keys that are not valid selectors by themselves.
+
+The following example demonstrates how to format a selector.
+
+```text
+// Accessing nested claims
+// https://cloud.google.com/compute/docs/instances/verifying-instance-identity#payload
+jwt_claims.google.compute_engine.project_id
+
+// Also selects the same key
+jwt_claims["google"]["compute_engine"]["project_id"]
+```
+
+### AWS
+
+With AWS, you can use the following selectors, which correspond to the [AWS GetCallerIdentity](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html) response.
+ +- `aws.arn`: The AWS ARN associated with the calling entity +- `aws.account_id`: The AWS account ID number of the account that owns or contains the calling entity +- `aws.user_id`: The unique identifier of the calling entity + +### OIDC Providers + +When exchanging an OIDC Token for access to HCP, the conditional access statement has access to all the claims in the token with the `jwt_claims.` prefix. The following example is a token for a [GitHub Actions Workflow](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#understanding-the-oidc-token). + + + +```json +{ + "jti": "example-id", + "sub": "repo:octo-org/octo-repo:environment:prod", + "environment": "prod", + "aud": "https://github.com/octo-org", + "ref": "refs/heads/main", + "sha": "example-sha", + "repository": "octo-org/octo-repo", + "repository_owner": "octo-org", + "actor_id": "12", + "repository_visibility": "private", + "repository_id": "74", + "repository_owner_id": "65", + "run_id": "example-run-id", + "run_number": "10", + "run_attempt": "2", + "runner_environment": "github-hosted", + "actor": "octocat", + "workflow": "example-workflow", + "head_ref": "", + "base_ref": "", + "event_name": "workflow_dispatch", + "ref_type": "branch", + "job_workflow_ref": "octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main", + "iss": "https://token.actions.githubusercontent.com", + "nbf": 1632492967, + "exp": 1632493867, + "iat": 1632493567 +} +``` + + + +The following table contains a non-exhaustive list of valid selectors and the values they match: + +| Selector | Matched value | +| :----------------------- | :------------------- | +| `jwt_claims.repository` | `octo-org/octo-repo` | +| `jwt_claims.workflow` | `example-workflow` | +| `jwt_claims.environment` | `prod` | + +## Values + +Operators match values when evaluating an expression. + +Values can be any valid selector, a number, or a string. 
Numbers can be either base 10 integers or floating point numbers.
+
+It is a best practice to use quotation marks with values. When quoting strings, you can use double quotes or backticks. When enclosed in backticks, the string is treated as a raw string, and escape sequences such as `\n` are not expanded.
+
+## Compound expressions
+
+There are several methods for connecting expressions into larger compound expressions. You can connect expressions with one or more of the following:
+
+- logical `or`
+- logical `and`
+- logical `not`
+- grouping with parentheses
+- matching expressions
+
+The following example demonstrates common syntax options for compound expressions.
+
+```text
+// Logical Or - evaluates to true if either sub-expression does
+<Expression 1> or <Expression 2>
+
+// Logical And - evaluates to true if both sub-expressions do
+<Expression 1> and <Expression 2>
+
+// Logical Not - evaluates to true if the sub-expression does not
+not <Expression>
+
+// Grouping - Overrides normal precedence rules
+( <Expression> )
+
+// Inspects data to check for a match
+<Matching expression>
+```
+
+Standard operator precedence applies to expressions. For example, the following two expressions are equivalent:
+
+```text
+<Expression 1> and not <Expression 2> or <Expression 3>
+
+( <Expression 1> and (not <Expression 2>)) or <Expression 3>
+```
+
+## Examples
+
+The following examples demonstrate common patterns for conditional access statement expressions.
+
+Use `matches` to restrict access to workloads with the AWS role `my-app-role`.
+
+```text
+aws.arn matches "^arn:aws:sts::123456789012:assumed-role/my-app-role/"
+```
+
+Use `matches` and `or` to restrict access to workloads with the AWS role `my-app-role` or `my-other-app-role`.
+
+```text
+aws.arn matches "^arn:aws:sts::123456789012:assumed-role/my-app-role/" or aws.arn matches "^arn:aws:sts::123456789012:assumed-role/my-other-app-role/"
+```
+
+Use `==` to restrict access to GCP workloads with the Service Account `106356042740441904560` and running in the project ID `my-app-project-191923`.
+
+```text
+jwt_claims.sub == "106356042740441904560" and jwt_claims.google.compute_engine.project_id == "my-app-project-191923"
+```
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/aws.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/aws.mdx
new file mode 100644
index 0000000000..51c1681294
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/aws.mdx
@@ -0,0 +1,185 @@
+---
+page_title: Federate workload identity with AWS
+description: |-
+  Workload identity federation enables external workloads to access HCP services through an external identity provider. Learn how to configure the AWS identity provider and the HCP platform so that external workloads can authenticate with the HCP identity service.
+---
+
+# Federate workload identity with AWS
+
+This page describes how to set up workload identity federation with AWS so that HCP authenticates external workloads, such as those running on EC2 or Lambda. Authenticated workloads can interact with HCP services without storing any HCP service principal keys.
+
+## Prerequisites
+
+You must complete the following steps before configuring a workload identity provider for AWS:
+
+- You must have the `Admin` role on the HCP project.
+- [Create a service principal in the desired project](/hcp/docs/hcp/iam/service-principal#project-level-service-principals) and grant it access to the HCP resources your workload requires.
+- Have access to the AWS account from which you want to federate access to HCP.
+- Install the [HCP CLI](/hcp/docs/cli/install) or use the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs), depending on your desired configuration workflow.
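HCP identifies an AWS workload by its STS caller identity. As a quick illustration of the identity attributes involved, the following Python sketch maps a `GetCallerIdentity`-style JSON response (field names per the AWS STS API) to the `aws.*` attribute names used on this page; the mapping itself is illustrative, not HCP's implementation.

```python
import json

# Sketch: translate an `aws sts get-caller-identity` JSON response into the
# attribute names a conditional access statement can reference.
# ILLUSTRATIVE ONLY -- not HCP's implementation.
def to_selectors(response_json):
    identity = json.loads(response_json)
    return {
        "aws.arn": identity["Arn"],
        "aws.account_id": identity["Account"],
        "aws.user_id": identity["UserId"],
    }

# Hypothetical response for an EC2 instance running with an assumed role.
sample = json.dumps({
    "UserId": "AROAEXAMPLE:i-00000000000000000",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/my-app-role/i-00000000000000000",
})
print(to_selectors(sample)["aws.arn"])
```

Running `aws sts get-caller-identity` on the workload itself is the reliable way to see the real values these attributes take.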
+
+## Configure a workload identity provider for AWS
+
+To create an HCP workload identity provider for AWS, you must provide the following configuration values:
+
+- The AWS Account ID
+- A conditional access statement
+
+If you do not know your AWS Account ID, [refer to the AWS documentation for guidance](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-identifiers.html).
+
+### Conditional access statement
+
+When federating workload identity, one of the requirements is a [conditional access statement](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement). This statement is a boolean expression that has access to the identity claims and restricts which external identities are allowed.
+
+When using AWS, you can access the following selectors, which correspond to the [AWS GetCallerIdentity](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html) response.
+
+- `aws.arn`: The AWS ARN associated with the calling entity
+- `aws.account_id`: The AWS account ID number of the account that owns or contains the calling entity
+- `aws.user_id`: The unique identifier of the calling entity
+
+To check the values of these fields, run the [`aws sts get-caller-identity` command](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html#examples) on the workload instance you are trying to configure.
+
+Use these values to build a conditional access statement. The most useful attribute is `aws.arn`, which follows this syntax for assumed roles:
+
+```text
+arn:aws:sts::<account-id>:assumed-role/<role-name>/<role-session-name>
+```
+
+In the following example, `123456789012` is the AWS Account ID, `my-app-role` is the [AWS IAM role attached to the EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), and `i-00000000000000000` is the EC2 Instance ID, which serves as the role session name.
+
+```text
+arn:aws:sts::123456789012:assumed-role/my-app-role/i-00000000000000000
+```
+
+The following conditional access statement restricts access to HCP services to instances with the `my-app-role` IAM role attached.
+
+```text
+aws.arn matches "^arn:aws:sts::123456789012:assumed-role/my-app-role/"
+```
+
+## Create the workload identity provider
+
+After you gather the information you need, create the workload identity provider for AWS.
+
+
+
+
+
+Use the [`hcp iam workload-identity-providers create-aws` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-aws).
+
+```shell-session
+$ hcp iam workload-identity-providers create-aws <provider-name> \
+    --account-id=<account-id> \
+    --conditional-access=<conditional-access-statement> \
+    --service-principal=<service-principal-resource-name> \
+    --description=<description>
+```
+
+This command requires the following information that is specific to your AWS and HCP accounts:
+
+- `<provider-name>`: The name of the workload identity provider to create.
+- `<account-id>`: The AWS Account ID that you want to allow federation from.
+- `<conditional-access-statement>`: The conditional access statement that restricts access to the specified AWS workloads.
+- `<service-principal-resource-name>`: The service principal's resource name, in the format `iam/project/<project-id>/service-principal/<service-principal-name>`.
+- `<description>`: An optional description for the provider.
+
+The following example creates a workload identity provider named `aws-example`. This provider configures HCP to restrict access to external workloads by requiring that they have the `my-app-role` AWS IAM role attached. Workloads that are granted access to HCP services use the `my-app-runtime` service principal.
+
+```shell-session
+$ hcp iam workload-identity-providers create-aws aws-example \
+    --account-id=123456789012 \
+    --conditional-access='aws.arn matches "^arn:aws:sts::123456789012:assumed-role/my-app-role/"' \
+    --service-principal=iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-runtime \
+    --description="Allow my-app-role on AWS to act as my-app-runtime service principal"
+```
+
+
+
+
+
+Use the [`hcp_iam_workload_identity_provider` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/iam_workload_identity_provider).
+
+```hcl
+# Replace with an existing service principal if created ahead of time.
+resource "hcp_service_principal" "workload_sp" {
+  name = "my-app-runtime"
+}
+
+resource "hcp_iam_workload_identity_provider" "example" {
+  name              = "aws-example"
+  service_principal = hcp_service_principal.workload_sp.resource_name
+  description       = "Allow my-app on AWS to act as my-app-runtime service principal"
+
+  aws {
+    # Only allow workloads from this AWS Account to exchange identity
+    account_id = "<account-id>"
+  }
+
+  conditional_access = "<conditional-access-statement>"
+}
+```
+
+This configuration requires the following information that is specific to your AWS and HCP accounts:
+
+- `<account-id>`: The AWS Account ID that you want to allow federation from.
+- `<conditional-access-statement>`: The conditional access statement that restricts access to the specified AWS workloads.
+
+The following example creates a workload identity provider named `aws-example`. This provider configures HCP to restrict access to external workloads by requiring that they have the `my-app-role` AWS IAM role attached. Workloads that are granted access to HCP services use the `my-app-runtime` service principal.
+
+```hcl
+# Replace with an existing service principal if created ahead of time.
+resource "hcp_service_principal" "workload_sp" {
+  name = "my-app-runtime"
+}
+
+resource "hcp_iam_workload_identity_provider" "example" {
+  name              = "aws-example"
+  service_principal = hcp_service_principal.workload_sp.resource_name
+  description       = "Allow my-app-role on AWS to act as my-app-runtime service principal"
+
+  aws {
+    # Only allow workloads from this AWS Account to exchange identity
+    account_id = "123456789012"
+  }
+
+  # Restrict access to workloads running with "my-app-role".
+  conditional_access = "aws.arn matches `^arn:aws:sts::123456789012:assumed-role/my-app-role/`"
+}
+```
+
+
+
+
+## Authenticate the workload's credentials
+
+You can use the [HCP CLI](/hcp/docs/cli/install), [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs), or [HCP Go SDK](https://github.com/hashicorp/hcp-sdk-go/tree/main) to automatically retrieve external credentials and exchange them for an HCP access token. This process uses a credential file that contains the information required to obtain external credentials and the workload identity provider to exchange them with.
+
+For more information, refer to [credential files](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#credential-files).
+
+## Create the credential file for AWS
+
+Use the [`hcp iam workload-identity-providers create-cred-file` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-cred-file).
+
+```shell-session
+$ hcp iam workload-identity-providers create-cred-file <provider-resource-name> \
+    --aws \
+    --output-file=credentials.json
+```
+
+This command requires the following information that is specific to your HCP account:
+
+- `<provider-resource-name>`: The resource name of the workload identity provider to exchange credentials with, in the format `iam/project/<project-id>/service-principal/<service-principal-name>/workload-identity-provider/<provider-name>`.
+
+The following example creates a `credentials.json` file using the `aws-example` provider, which is associated with the `my-app-runtime` service principal.
+
+```shell-session
+$ hcp iam workload-identity-providers create-cred-file \
+    iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-runtime/workload-identity-provider/aws-example \
+    --aws \
+    --output-file=credentials.json
+```
+
+If you are using [IMDSv1](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html), set the `--imdsv1` flag when you run `create-cred-file`.
+
+Ensure the credential file exists in the runtime environment. Because the credential file contains no secret values, you can store it in the AMI, Lambda deployment package, or a container. Alternatively, you can generate it at runtime.
+
+To use the credential file to authenticate, you can use the HCP CLI, the HCP Terraform provider, or the HCP Go SDK. Refer to [use a credential file to authenticate](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#use-a-credential-file-to-authenticate) for more information.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/azure.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/azure.mdx
new file mode 100644
index 0000000000..ce9c1ee2fe
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/azure.mdx
@@ -0,0 +1,205 @@
+---
+page_title: Federate workload identity with Azure
+description: |-
+  Workload identity federation enables external workloads to access HCP services through an external identity provider. Learn how to configure the Azure identity provider and the HCP platform so that external workloads can authenticate with the HCP identity service.
+
+--- + +# Federate workload identity with Azure + +This page describes how to set up workload identity federation to authenticate from Azure VM workloads using [managed identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview). Authenticated workloads can interact with HCP services without storing any HCP service principal keys. + +## Prerequisites + +You must complete the following steps before configuring a workload identity provider for Azure: + +- You must have the `Admin` role on the HCP project. +- [Create a service principal in the desired project](/hcp/docs/hcp/iam/service-principal#project-level-service-principals) and grant it access to the HCP resources your workload requires. +- Have access to the Azure account to federate access to HCP from. +- Install the [HCP CLI](/hcp/docs/cli/install) or use the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) based on your desired configuration workflow. + +### Azure configuration prerequisites + +To federate workload identity with Azure, you need to create and configure a new [Azure AD application](https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals#application-object) in your Azure AD tenant. After you configure the workload identity provider to trust the application, Azure workloads can retrieve access tokens for this application and exchange them for HCP access tokens. + +Complete the following steps to prepare your Azure environment for workload identity federation: + +1. [Register a Microsoft Entra app and create a service principal](https://docs.microsoft.com/en-au/azure/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal). 
You can use the default [Application (client) ID](https://learn.microsoft.com/en-au/entra/identity-platform/howto-create-service-principal-portal#sign-in-to-the-application) or specify a custom URI, but make note of the Application ID URI. You need this value when you configure the workload identity provider.
+1. [Create a managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal). Note its [Object ID](https://learn.microsoft.com/en-us/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object). You can use this value when you configure the conditional access statement.
+1. [Assign the managed identity](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#user-assigned-managed-identity) to a virtual machine or another resource where your application is running.
+
+## Configure an external workload identity provider for Azure
+
+To create an HCP workload identity provider for Azure, you must provide the following configuration values:
+
+- Your [Azure AD Tenant ID (GUID)](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id).
+- The Application ID URI of the application you registered in Azure AD.
+- A conditional access statement.
+
+### Conditional access statement
+
+When federating workload identity, one of the requirements is a [conditional access statement](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement). This statement is a boolean expression that has access to the identity claims and restricts which external identities are allowed.
+
+When exchanging an Azure token for access to HCP, the conditional access statement has access to all the claims in the token with the `jwt_claims.` prefix.
For most scenarios, we recommend using the `sub` claim that matches the `Object ID` of the managed identity attached to the workload. For more information, refer to the [access token claim reference in the Azure documentation](https://learn.microsoft.com/en-us/entra/identity-platform/access-token-claims-reference).
+
+The following example conditional access statement restricts access to HCP services to Azure workloads assigned the managed identity whose `Object ID` is `d4766c62-e179-49f9-b3a8-3a8c6720aa96`.
+
+```text
+jwt_claims.sub == "d4766c62-e179-49f9-b3a8-3a8c6720aa96"
+```
+
+For a complete list of claims you can reference, complete the following steps:
+
+1. Create a [VM that has the managed identity assigned to it](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#user-assigned-managed-identity).
+1. SSH to the VM.
+1. Obtain an access token using the [IMDS endpoint](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http). Replace the `APP_ID` with the Application ID from the Azure AD application or the custom URI [described in the Azure configuration prerequisites](#azure-configuration-prerequisites).
+
+    ```shell-session
+    curl "http://169.254.169.254/metadata/identity/oauth2/token?resource=APP_ID&api-version=2018-02-01" \
+      -H "Metadata: true" | jq -r .access_token
+    ```
+
+1. To view the claims, copy the access token and paste it into a [browser-based token decoder](https://jwt.ms/). The claims listed are values available when creating the conditional access statement.
+
+## Create the workload identity provider
+
+After you gather the information you need, create the workload identity provider for Azure.
+
+
+
+
+Use the [`hcp iam workload-identity-providers create-oidc` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-oidc).
+
+```shell-session
+$ hcp iam workload-identity-providers create-oidc \
+  --service-principal= \
+  --issuer=https://sts.windows.net// \
+  --allowed-audience= \
+  --conditional-access= \
+  --description=
+```
+
+This command requires the following information that is specific to your HCP and Azure accounts:
+
+- ``: The name of the workload identity provider to create.
+- ``: The service principal’s resource name, in the format `iam/project//service-principal/`.
+- ``: The [Azure AD Tenant ID (GUID)](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id) you want to allow federation from. Sometimes this ID is formatted as `https://sts.windows.net//`.
+- ``: The `Application ID` from the Azure AD Application, or the custom URI used.
+- ``: The conditional access statement that restricts access to the specified Azure workload.
+- ``: An optional description for the provider.
+
+The following example creates a workload identity provider named `azure-example`. This provider configures HCP to restrict access to external workloads by requiring that their JWT token have `d4766c62-e179-49f9-b3a8-3a8c6720aa96` as a `sub` claim. Workloads that are granted access to HCP services use the `my-app-runtime` service principal.
+
+```shell-session
+$ hcp iam workload-identity-providers create-oidc azure-example \
+  --service-principal=iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-runtime \
+  --issuer=https://sts.windows.net/60a0d497-45cd-413d-95ca-e154bbb9129b/ \
+  --allowed-audience=d821efa3-8cd7-4977-bdf7-bd6e44b1dc46 \
+  --conditional-access='jwt_claims.sub == "d4766c62-e179-49f9-b3a8-3a8c6720aa96"' \
+  --description="Allow my-app-role on Azure to act as my-app-runtime service principal"
+```
+
+
+
+
+
+Use the [`hcp_iam_workload_identity_provider` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/iam_workload_identity_provider).
+ +```hcl +# Replace with an existing service principal if created ahead of time. +resource "hcp_service_principal" "workload_sp" { + name = "my-app-runtime" +} + +resource "hcp_iam_workload_identity_provider" "example" { + name = "azure-example" + service_principal = hcp_service_principal.workload_sp.resource_name + description = "Allow my-app on Azure to act as my-app-runtime service principal" + + oidc { + # The issuer URI should be as follows where the ID in the path is replaced + # with your Azure Tenant ID + issuer_uri = "https://sts.windows.net/" + + # The allowed audience should be set to the Application ID from the Azure AD + # Application or the custom URI used. + allowed_audiences = [""] + } + + conditional_access = "" +} +``` + +This configuration requires the following information that is specific to your Azure account: + +- ``: The [Azure AD Tenant ID (GUID)](https://learn.microsoft.com/en-us/azure/azure-portal/get-subscription-tenant-id) you want to allow federation from. Sometimes this ID is formatted as `https://sts.windows.net//`. +- ``: The `Application ID` from the Azure AD Application, or the custom URI used. +- ``: The conditional access statement that restricts access to the specified Azure workload. + +The following example creates a workload identity provider named `azure-example`. This provider configures HCP to restrict access to external workloads by requiring that their JWT token have `d4766c62-e179-49f9-b3a8-3a8c6720aa96` as a `sub` claim. Workloads that are granted access to HCP services use the `my-app-runtime` service principal. + +```hcl +# Replace with an existing service principal if created ahead of time. 
+resource "hcp_service_principal" "workload_sp" {
+  name = "my-app-runtime"
+}
+
+resource "hcp_iam_workload_identity_provider" "example" {
+  name = "azure-example"
+  service_principal = hcp_service_principal.workload_sp.resource_name
+  description = "Allow my-app-role on Azure to act as my-app-runtime service principal"
+
+  oidc {
+    # The issuer URI should be as follows, where the ID in the path is replaced
+    # with your Azure Tenant ID.
+    issuer_uri = "https://sts.windows.net/60a0d497-45cd-413d-95ca-e154bbb9129b"
+
+    # The allowed audience should be set to the Application ID from the Azure AD
+    # Application or the custom URI used.
+    allowed_audiences = ["d821efa3-8cd7-4977-bdf7-bd6e44b1dc46"]
+  }
+
+  # Only allow workloads that are assigned the expected managed identity.
+  # The access_token given to Azure workloads will have the sub claim set to
+  # that of the managed identity.
+  conditional_access = "jwt_claims.sub == \"d4766c62-e179-49f9-b3a8-3a8c6720aa96\""
+}
+```
+
+
+
+
+## Authenticate the workload's credentials
+
+You can use the [HCP CLI](/hcp/docs/cli/install), [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs), or [HCP Go SDK](https://github.com/hashicorp/hcp-sdk-go/tree/main) to automatically retrieve external credentials and exchange them for an HCP access token. This process uses a credential file that contains the information required to obtain external credentials and the workload identity provider to exchange them with.
+
+For more information, refer to [credential files](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#credential-files).
+
+### Create the credential file for Azure
+
+Use the [`hcp iam workload-identity-providers create-cred-file` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-cred-file).
+
+```shell-session
+$ hcp iam workload-identity-providers create-cred-file \
+  --azure \
+  --azure-resource= \
+  --output-file=credentials.json
+```
+
+This command requires the following information that is specific to your HCP and Azure accounts:
+
+- ``: The name of the workload identity provider to exchange credentials with, in the format `iam/project//service-principal//workload-identity-provider/`.
+- ``: The `Application ID` from the Azure AD Application, or the custom URI used.
+
+The following example creates a `credentials.json` file using the `azure-example` provider, which is associated with the `my-app-runtime` service principal.
+
+```shell-session
+$ hcp iam workload-identity-providers create-cred-file \
+  iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-runtime/workload-identity-provider/azure-example \
+  --azure \
+  --azure-resource=d821efa3-8cd7-4977-bdf7-bd6e44b1dc46 \
+  --output-file=credentials.json
+```
+
+Ensure the credential file exists in the runtime environment. Because the credential file contains no secret values, you can store the credential file in the VM image. Alternatively, you can generate it at runtime.
+
+To use the credential file to authenticate, you can use the HCP CLI, the HCP Terraform provider, or the HCP Go SDK. Refer to [use a credential file to authenticate](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#use-a-credential-file-to-authenticate) for more information.
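As an alternative to pasting the token into a browser-based decoder, you can inspect the claims locally. The following Python sketch (an illustration, not part of the HCP tooling) decodes the payload segment of a JWT without verifying its signature, which is sufficient for inspecting claims; HCP performs the real signature verification during the token exchange. The sample token built here is a hypothetical unsigned stand-in for the IMDS-issued access token.

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying it.

    Only for local inspection of claims; never trust unverified tokens.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments use unpadded base64url; restore padding before decoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a hypothetical unsigned token for illustration only. In practice,
# pass the access token returned by the IMDS curl command above.
claims = {"sub": "d4766c62-e179-49f9-b3a8-3a8c6720aa96", "aud": "hcp"}
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{body}."

print(decode_jwt_claims(token))
```

Every key printed with the `jwt_claims.` prefix is available to the conditional access statement.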
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gcp.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gcp.mdx new file mode 100644 index 0000000000..a199cca2f0 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gcp.mdx @@ -0,0 +1,179 @@ +--- +page_title: Federate workload identity with GCP +description: |- + Workload identity federation enables external workloads to access HCP services through an external identity provider. Learn how to configure the GCP identity provider and the HCP platform so that external workloads can authenticate with the HCP identity service. +--- + +# Federate workload identity with GCP + +This page describes how to set up workload identity federation to authenticate from GCP workloads. Authenticated workloads can interact with HCP services without storing any HCP service principal keys. + +## Prerequisites + +You must complete the following steps before configuring a workload identity provider for GCP: + +- You must have the `Admin` role on the HCP project. +- [Create a service principal in the desired project](/hcp/docs/hcp/iam/service-principal#project-level-service-principals) and grant it access to the HCP resources your workload requires. +- Have access to the GCP account to federate access to HCP from. +- Install the [HCP CLI](/hcp/docs/cli/install) or use the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) based on your desired configuration workflow. 
+
+### GCP configuration prerequisites
+
+To federate workload identity with GCP, you need to [create a Service Account](https://cloud.google.com/iam/docs/service-account-overview) in your GCP Project and then attach it to your workload, such as [a VM you created](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#console_1) or [a Cloud Run Service](https://cloud.google.com/run/docs/securing/service-identity). After you configure the workload identity provider to trust the Service Account, your GCP workloads can retrieve GCP access tokens and exchange them for HCP access tokens.
+
+Complete the following steps to prepare your GCP environment for workload identity federation:
+
+1. [Create a GCP Service Account](https://cloud.google.com/iam/docs/service-accounts-create). After you create it, go to the "Service account details" page and make note of the Service Account’s `Unique ID`. You need this value when you configure the workload identity provider.
+1. [Assign the Service Account to a virtual machine](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#using) or another resource where your application runs.
+
+## Configure an external workload identity provider for GCP
+
+To create an HCP workload identity provider for GCP, you must provide the following configuration values:
+
+- The `Unique ID` of the Service Account you created.
+- A conditional access statement.
+
+### Conditional access statement
+
+When federating workload identity, one of the requirements is a [conditional access statement](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement). This statement is a boolean expression that has access to the identity claims and restricts which external identities are allowed.
+
+When exchanging a GCP token for access to HCP, the conditional access statement has access to all the claims in the token with the `jwt_claims.` prefix.
For most scenarios, we recommend using the `sub` claim that matches the `Unique ID` of the Service Account attached to the workload. For more information, refer to the [access token claim reference in the GCP documentation](https://cloud.google.com/compute/docs/instances/verifying-instance-identity#payload).
+
+The following example conditional access statement restricts access to HCP services to GCP workloads assigned the Service Account whose `Unique ID` is `106356042740441904560`.
+
+```text
+jwt_claims.sub == "106356042740441904560"
+```
+
+For a complete list of claims you can reference, complete the following steps:
+
+1. [Create a VM that has the Service Account assigned to it](https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances).
+1. [SSH to the VM](https://cloud.google.com/compute/docs/connect/standard-ssh).
+1. [Obtain an identity token using the metadata server](https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature). The following example uses `hcp` for its `aud` claim, and specifies `full` for the format.
+
+    ```shell-session
+    curl -H "Metadata-Flavor: Google" \
+      'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=hcp&format=full'
+    ```
+
+1. To view the claims, copy the identity token and paste it into a [browser-based token decoder](https://jwt.ms/). The claims listed are values available when creating the conditional access statement.
+
+## Create the workload identity provider
+
+After you gather the information you need, create the workload identity provider for GCP.
+
+
+
+
+Use the [`hcp iam workload-identity-providers create-oidc` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-oidc).
+
+```shell-session
+$ hcp iam workload-identity-providers create-oidc \
+  --service-principal= \
+  --issuer=https://accounts.google.com \
+  --conditional-access= \
+  --description=
+```
+
+This command requires the following information that is specific to your HCP and GCP accounts:
+
+- ``: The name of the workload identity provider to create.
+- ``: The service principal’s resource name, in the format `iam/project//service-principal/`.
+- ``: The conditional access statement that restricts access to the specified GCP workload.
+- ``: An optional description for the provider.
+
+The following example creates a workload identity provider named `gcp-example`. This provider configures HCP to restrict access to external workloads by requiring that their JWT token have `106356042740441904560` as a `sub` claim. Workloads that are granted access to HCP services use the `my-app-runtime` service principal.
+
+```shell-session
+$ hcp iam workload-identity-providers create-oidc gcp-example \
+  --service-principal=iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-runtime \
+  --issuer=https://accounts.google.com \
+  --conditional-access='jwt_claims.sub == "106356042740441904560"' \
+  --description="Allow my-app Service Account on GCP to act as my-app-runtime service principal"
+```
+
+
+
+
+
+Use the [`hcp_iam_workload_identity_provider` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/iam_workload_identity_provider).
+
+```hcl
+# Replace with an existing service principal if created ahead of time.
+resource "hcp_service_principal" "workload_sp" {
+  name = "my-app-runtime"
+}
+
+resource "hcp_iam_workload_identity_provider" "example" {
+  name = "gcp-example"
+  service_principal = hcp_service_principal.workload_sp.resource_name
+  description = "Allow my-app Service Account on GCP to act as my-app-runtime service principal"
+
+  oidc {
+    issuer_uri = "https://accounts.google.com"
+  }
+
+  conditional_access = ""
+}
+```
+
+This configuration requires the following information that is specific to your GCP account:
+
+- ``: The conditional access statement that restricts access to the specified GCP workload.
+
+The following example creates a workload identity provider named `gcp-example`. This provider configures HCP to restrict access to external workloads by requiring that their JWT token have `106356042740441904560` as a `sub` claim. Workloads that are granted access to HCP services use the `my-app-runtime` service principal.
+
+```hcl
+# Replace with an existing service principal if created ahead of time.
+resource "hcp_service_principal" "workload_sp" {
+  name = "my-app-runtime"
+}
+
+resource "hcp_iam_workload_identity_provider" "example" {
+  name = "gcp-example"
+  service_principal = hcp_service_principal.workload_sp.resource_name
+  description = "Allow my-app Service Account on GCP to act as my-app-runtime service principal"
+
+  oidc {
+    issuer_uri = "https://accounts.google.com"
+  }
+
+  conditional_access = "jwt_claims.sub == \"106356042740441904560\""
+}
+```
+
+
+
+
+## Authenticate the workload's credentials
+
+You can use the [HCP CLI](/hcp/docs/cli/install), [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs), or [HCP Go SDK](https://github.com/hashicorp/hcp-sdk-go/tree/main) to automatically retrieve external credentials and exchange them for an HCP access token.
This process uses a credential file that contains the information required to obtain external credentials and the workload identity provider to exchange them with.
+
+For more information, refer to [credential files](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#credential-files).
+
+### Create the credential file for GCP
+
+Use the [`hcp iam workload-identity-providers create-cred-file` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-cred-file).
+
+```shell-session
+$ hcp iam workload-identity-providers create-cred-file \
+  --gcp \
+  --output-file=credentials.json
+```
+
+This command requires the following information that is specific to your HCP account:
+
+- ``: The name of the workload identity provider to exchange credentials with, in the format `iam/project//service-principal//workload-identity-provider/`.
+
+The following example creates a `credentials.json` file using the `gcp-example` provider, which is associated with the `my-app-runtime` service principal.
+
+```shell-session
+$ hcp iam workload-identity-providers create-cred-file \
+  iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-runtime/workload-identity-provider/gcp-example \
+  --gcp \
+  --output-file=credentials.json
+```
+
+Ensure the credential file exists in the runtime environment. Because the credential file contains no secret values, you can store the credential file in the VM image. Alternatively, you can generate it at runtime.
+
+To use the credential file to authenticate, you can use the HCP CLI, the HCP Terraform provider, or the HCP Go SDK. Refer to [use a credential file to authenticate](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#use-a-credential-file-to-authenticate) for more information.
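Conceptually, the conditional access statement on this page is a boolean expression evaluated against the decoded token claims. The following Python sketch illustrates that logic for the `gcp-example` provider; it is an illustration of the check's effect, not HCP's implementation, and the abbreviated claim set is hypothetical.

```python
# Illustration only: mimics the effect of the conditional access statement
#   jwt_claims.sub == "106356042740441904560"
EXPECTED_UNIQUE_ID = "106356042740441904560"

def conditional_access_allows(jwt_claims: dict) -> bool:
    """Return True when the token's sub claim matches the Service
    Account Unique ID named in the provider configuration."""
    return jwt_claims.get("sub") == EXPECTED_UNIQUE_ID

# Abbreviated claims as they might appear in a decoded GCP identity token;
# see the GCP payload reference linked above for the full set.
claims = {
    "iss": "https://accounts.google.com",
    "aud": "hcp",
    "sub": "106356042740441904560",
}

print(conditional_access_allows(claims))
print(conditional_access_allows({"sub": "some-other-service-account"}))
```

A token whose `sub` does not match the configured `Unique ID` fails the check, so the exchange for an HCP access token is rejected.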
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/github.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/github.mdx new file mode 100644 index 0000000000..d85c96456f --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/github.mdx @@ -0,0 +1,183 @@ +--- +page_title: Federate workload identity with GitHub +description: |- + Workload identity federation enables external workloads to access HCP services through an external identity provider. Learn how to configure the GitHub identity provider and the HCP platform so that external workloads can authenticate with the HCP identity service. +--- + +# Federate workload identity with GitHub + +This page describes how to set up workload identity federation to authenticate from a [GitHub Actions Workflow](https://docs.github.com/en/actions/using-workflows/about-workflows) using a [GitHub OIDC token](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect). Authenticated workloads can interact with HCP services without storing any HCP service principal keys. + +## Prerequisites + +You must complete the following steps before configuring a workload identity provider for GitHub: + +- You must have the `Admin` role on the HCP project. +- [Create a service principal in the desired project](/hcp/docs/hcp/iam/service-principal#project-level-service-principals) and grant it access to the HCP resources your workload requires. +- Have access to the GitHub Workflow to federate access to HCP from. +- Install the [HCP CLI](/hcp/docs/cli/install) or use the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) based on your desired configuration workflow. 
+
+## Configure an external workload identity provider for GitHub
+
+To create an HCP workload identity provider for GitHub Actions, you need a conditional access statement.
+
+### Conditional access statement
+
+When federating workload identity, one of the requirements is a [conditional access statement](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement). This statement is a boolean expression that has access to the identity claims and restricts which external identities are allowed.
+
+When exchanging a GitHub OIDC token for access to HCP, the conditional access statement has access to all the claims in the token with the `jwt_claims.` prefix. For more information, refer to the [OIDC token claim reference in the GitHub documentation](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#understanding-the-oidc-token).
+
+The following example conditional access statement restricts access to HCP services to GitHub Actions that originate from the `acme-org/acme-repo` GitHub repository.
+
+```text
+jwt_claims.repository == "acme-org/acme-repo"
+```
+
+To restrict access further so that only requests from the Git branch `main` are authenticated, you can add an additional selector and value.
+
+```text
+jwt_claims.repository == "acme-org/acme-repo" and jwt_claims.ref == "refs/heads/main"
+```
+
+For more information about using selectors and values to create compound conditional access statements, refer to [conditional access statements](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement).
+
+## Create the workload identity provider
+
+After you gather the information you need, create the workload identity provider for GitHub.
+
+
+
+
+Use the [`hcp iam workload-identity-providers create-oidc` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-oidc).
+
+```shell-session
+$ hcp iam workload-identity-providers create-oidc \
+  --service-principal= \
+  --issuer=https://token.actions.githubusercontent.com \
+  --conditional-access='' \
+  --description=
+```
+
+This command requires the following information that is specific to your HCP and GitHub accounts:
+
+- ``: The name of the workload identity provider to create.
+- ``: The service principal’s resource name, in the format `iam/project//service-principal/`.
+- ``: The conditional access statement that restricts access to the specified GitHub repository and branch.
+- ``: An optional description for the provider.
+
+The following example creates a workload identity provider named `github-example`. This provider configures HCP to restrict access to GitHub Actions by requiring that the JWT token identifies the action as originating from the `main` branch of the `acme-org/acme-repo` repository. Workloads that are granted access to HCP services use the `my-app-deployer` service principal.
+
+```shell-session
+$ hcp iam workload-identity-providers create-oidc github-example \
+  --service-principal=iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-deployer \
+  --issuer=https://token.actions.githubusercontent.com \
+  --conditional-access='jwt_claims.repository == "acme-org/acme-repo" and jwt_claims.ref == "refs/heads/main"' \
+  --description="Allow acme-repo deploy workflow to access my-app-deployer service principal"
+```
+
+
+
+
+
+Use the [`hcp_iam_workload_identity_provider` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/iam_workload_identity_provider).
+
+```hcl
+# Replace with an existing service principal if created ahead of time.
+resource "hcp_service_principal" "deployment_sp" {
+  name = "my-app-deployer"
+}
+
+resource "hcp_iam_workload_identity_provider" "example" {
+  name = "github-example"
+  service_principal = hcp_service_principal.deployment_sp.resource_name
+  description = "Allow acme-repo deploy workflow to access my-app-deployer service principal"
+
+  oidc {
+    issuer_uri = "https://token.actions.githubusercontent.com"
+  }
+
+  conditional_access = ""
+}
+```
+
+This configuration requires the following information that is specific to your GitHub account:
+
+- ``: The conditional access statement that restricts access to the specified repository and branch.
+
+The following example creates a workload identity provider named `github-example`. This provider configures HCP to restrict access to GitHub Actions by requiring that the JWT token identifies the action as originating from the `main` branch of the `acme-org/acme-repo` repository. Workloads that are granted access to HCP services use the `my-app-deployer` service principal.
+
+```hcl
+# Replace with an existing service principal if created ahead of time.
+resource "hcp_service_principal" "deployment_sp" {
+  name = "my-app-deployer"
+}
+
+resource "hcp_iam_workload_identity_provider" "example" {
+  name = "github-example"
+  service_principal = hcp_service_principal.deployment_sp.resource_name
+  description = "Allow acme-repo deploy workflow to access my-app-deployer service principal"
+
+  oidc {
+    issuer_uri = "https://token.actions.githubusercontent.com"
+  }
+
+  conditional_access = "jwt_claims.repository == \"acme-org/acme-repo\" and jwt_claims.ref == \"refs/heads/main\""
+}
+```
+
+
+
+
+## Configure the GitHub Actions workflow
+
+The [`hashicorp/hcp-auth-action` GitHub Action](https://github.com/hashicorp/hcp-auth-action) automatically generates a credential file during workflow execution. You can use the [HCP CLI](/hcp/docs/cli/install) to automatically retrieve external credentials and exchange them for an HCP access token.
+
+Add the following configuration to your GitHub Actions YAML file.
+
+```yaml
+jobs:
+  job_id:
+    permissions:
+      contents: 'read'
+      id-token: 'write'
+
+    steps:
+      - name: 'Authenticate to HCP'
+        uses: 'hashicorp/hcp-auth-action@v0'
+        with:
+          workload_identity_provider:
+```
+
+This configuration requires the following information that is specific to your HCP account:
+
+- ``: The name of the workload identity provider to exchange credentials with, in the format `iam/project//service-principal//workload-identity-provider/`.
+
+The following example uses the [`hashicorp/hcp-auth-action` GitHub Action](https://github.com/hashicorp/hcp-auth-action) to authenticate with HCP. Then it runs the [`hashicorp/hcp-setup-action` GitHub Action](https://github.com/hashicorp/hcp-setup-action) to download the HCP CLI. Finally, it uses the HCP CLI to read a secret from [HCP Vault Secrets](/hcp/docs/vault-secrets).
+
+```yaml
+jobs:
+  job_id:
+    permissions:
+      contents: 'read'
+      id-token: 'write'
+
+    steps:
+      - name: 'Authenticate to HCP'
+        uses: 'hashicorp/hcp-auth-action@v0'
+        with:
+          workload_identity_provider: 'iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-deployer/workload-identity-provider/github-example'
+
+      - name: 'Download hcp CLI'
+        uses: 'hashicorp/hcp-setup-action@v0'
+        with:
+          version: 'latest'
+
+      - name: 'Use hcp CLI to read a secret'
+        run: |
+          MY_SECRET=$(hcp vault-secrets secrets open \
+            --app=cli --format=json foo | jq -r '.static_version.value')
+          echo "::add-mask::$MY_SECRET"
+          echo "MY_SECRET=$MY_SECRET" >> $GITHUB_ENV
+```
+
+Refer to the [HCP CLI command reference](/hcp/docs/cli/commands) for more information on commands you can use in your GitHub Actions workflows.
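The compound conditional access statement used on this page combines two claim checks with `and`. The following Python sketch illustrates how that compound restriction behaves; it is an illustration of the statement's effect, not HCP's implementation, and the sample claim dictionaries are hypothetical.

```python
def github_condition(jwt_claims: dict) -> bool:
    """Mirror of the compound statement:
    jwt_claims.repository == "acme-org/acme-repo"
      and jwt_claims.ref == "refs/heads/main"
    """
    return (
        jwt_claims.get("repository") == "acme-org/acme-repo"
        and jwt_claims.get("ref") == "refs/heads/main"
    )

# A workflow run on main in the allowed repository satisfies both checks.
print(github_condition({"repository": "acme-org/acme-repo",
                        "ref": "refs/heads/main"}))

# The same repository on a feature branch fails the ref check.
print(github_condition({"repository": "acme-org/acme-repo",
                        "ref": "refs/heads/feature"}))
```

Because both checks must pass, a token from a fork or a non-`main` branch cannot be exchanged for an HCP access token even though its `repository` or `ref` claim matches one of the conditions.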
diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gitlab.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gitlab.mdx new file mode 100644 index 0000000000..d99b16d9b2 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gitlab.mdx @@ -0,0 +1,174 @@ +--- +page_title: Federate workload identity with GitLab +description: |- + Workload identity federation enables external workloads to access HCP services through an external identity provider. Learn how to configure the GitLab identity provider and the HCP platform so that external workloads can authenticate with the HCP identity service. +--- + +# Federate workload identity with GitLab + +This page describes how to set up workload identity federation to authenticate from a [GitLab CI/CD Pipeline](https://docs.gitlab.com/ee/ci/index.html) using a [GitLab ID token](https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html). Authenticated workloads can interact with HCP services without storing any HCP service principal keys. + +## Prerequisites + +You must complete the following steps before configuring a workload identity provider for GitLab: + +- You must have the `Admin` role on the HCP project. +- [Create a service principal in the desired project](/hcp/docs/hcp/iam/service-principal#project-level-service-principals) and grant it access to the HCP resources your workload requires. +- Have access to the GitLab CI/CD Pipeline to federate access to HCP from. +- Install the [HCP CLI](/hcp/docs/cli/install) or use the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) based on your desired configuration workflow. 
+
+## Configure an external workload identity provider for GitLab
+
+To create an HCP workload identity provider for GitLab CI/CD, you need a conditional access statement.
+
+### Conditional access statement
+
+When federating workload identity, one of the requirements is a [conditional access statement](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement). This statement is a boolean expression that has access to the identity claims and restricts which external identities are allowed.
+
+When exchanging an identity token for access to HCP, the conditional access statement has access to all the claims in the token with the `jwt_claims.` prefix. For more information, refer to the [token claim reference in the GitLab documentation](https://docs.gitlab.com/ee/ci/secrets/id_token_authentication.html#token-payload).
+
+The following example conditional access statement restricts access to HCP services to jobs that originate from the `acme-group/acme-project` repository.
+
+```text
+jwt_claims.project_path == "acme-group/acme-project"
+```
+
+To restrict access further so that only requests from the Git branch `main` are authenticated, you can add an additional selector and value.
+
+```text
+jwt_claims.sub == "project_path:acme-group/acme-project:ref_type:branch:ref:main"
+```
+
+For more information about using selectors and values to create compound conditional access statements, refer to [conditional access statements](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement).
+
+## Create the workload identity provider
+
+After you gather the information you need, create the workload identity provider for GitLab.
+
+
+
+
+Use the [`hcp iam workload-identity-providers create-oidc` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-oidc).

```shell-session
$ hcp iam workload-identity-providers create-oidc \
  --service-principal= \
  --issuer=https://gitlab.com \
  --conditional-access= \
  --description=
```

This command requires the following information that is specific to your HCP and GitLab accounts:

- ``: The name of the workload identity provider to create.
- ``: The service principal's resource name, in the format `iam/project//service-principal/`.
- ``: The conditional access statement that restricts access to the specified repository and branch.
- ``: An optional description for the provider.

The following example creates a workload identity provider named `gitlab-example`. This provider configures HCP to restrict access to GitLab by requiring that the JWT token identifies the action as originating from the `main` branch of `acme-group/acme-project`. Workloads that are granted access to HCP services use the `my-app-deployer` service principal.

```shell-session
$ hcp iam workload-identity-providers create-oidc gitlab-example \
  --service-principal=iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-deployer \
  --issuer=https://gitlab.com \
  --conditional-access='jwt_claims.sub == "project_path:acme-group/acme-project:ref_type:branch:ref:main"' \
  --description="Allow acme-repo deploy job to access my-app-deployer service principal"
```

Use the [`hcp_iam_workload_identity_provider` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/iam_workload_identity_provider).

```hcl
# Replace with an existing service principal if created ahead of time.
resource "hcp_service_principal" "deployment_sp" {
  name = "my-app-deployer"
}

resource "hcp_iam_workload_identity_provider" "example" {
  name              = "gitlab-example"
  service_principal = hcp_service_principal.deployment_sp.resource_name
  description       = "Allow acme-project deploy job to access my-app-deployer service principal"

  oidc {
    issuer_uri = "https://gitlab.com"
  }

  conditional_access = ""
}
```

This configuration requires the following information that is specific to your GitLab account:

- ``: The conditional access statement that restricts access to the specified repository and branch.

The following example creates a workload identity provider named `gitlab-example`. This provider configures HCP to restrict access to GitLab by requiring that the JWT token identifies the workload as originating from the `main` branch of `acme-group/acme-project`. Workloads that are granted access to HCP services use the `my-app-deployer` service principal.

```hcl
# Replace with an existing service principal if created ahead of time.
resource "hcp_service_principal" "deployment_sp" {
  name = "my-app-deployer"
}

resource "hcp_iam_workload_identity_provider" "example" {
  name              = "gitlab-example"
  service_principal = hcp_service_principal.deployment_sp.resource_name
  description       = "Allow acme-repo deploy job to access my-app-deployer service principal"

  oidc {
    issuer_uri = "https://gitlab.com"
  }

  conditional_access = "jwt_claims.sub == \"project_path:acme-group/acme-project:ref_type:branch:ref:main\""
}
```

## Configure the CI/CD job

You can use the [HCP CLI](/hcp/docs/cli/install) to automatically retrieve external credentials and exchange them for an HCP access token. These instructions use the [`hashicorp/hcp` Docker container](https://hub.docker.com/r/hashicorp/hcp), but any runtime that has the HCP CLI installed can follow these steps.

Add the following to your CI/CD job YAML configuration.

```yaml
hcp:
  image: hashicorp/hcp
  id_tokens:
    WORKLOAD_IDENTITY_TOKEN:
      aud:
  script:
    - hcp iam workload-identity-providers create-cred-file
      --output-file=creds.json
      --source-env=WORKLOAD_IDENTITY_TOKEN
    - hcp auth login --cred-file=creds.json
    - MY_SECRET=$(hcp vault-secrets secrets open --app=my-app --format=json my-secret |
      jq -r '.static_version.value')
```

This configuration requires the following information that is specific to your HCP account:

- ``: The name of the workload identity provider to exchange credentials with, in the format `iam/project//service-principal//workload-identity-provider/`.

The following example creates a job named `hcp`. It configures the job to have an [ID token](https://docs.gitlab.com/ee/ci/yaml/#id_tokens) that identifies the `gitlab-test` workload identity provider in the `aud` claims. Then the job runs a script that creates a `credentials.json` file referencing the ID token. Finally, it uses the HCP CLI to read a secret from [HCP Vault Secrets](/hcp/docs/vault-secrets) and store it in an environment variable.

```yaml
hcp:
  image: hashicorp/hcp
  id_tokens:
    WORKLOAD_IDENTITY_TOKEN:
      aud: iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-deployer/workload-identity-provider/gitlab-test
  script:
    - hcp iam workload-identity-providers create-cred-file
      iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-app-deployer/workload-identity-provider/gitlab-test
      --output-file=credentials.json
      --source-env=WORKLOAD_IDENTITY_TOKEN
    - hcp auth login --cred-file=credentials.json
    - MY_SECRET=$(hcp vault-secrets secrets open --app=my-app --format=json my-secret |
      jq -r '.static_version.value')
```

Refer to the [HCP CLI command reference](/hcp/docs/cli/commands) for more information on commands you can use in your CI/CD pipelines.
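The conditional access statements on this page key on GitLab's `sub` claim, which packs the project path, ref type, and ref into a single `:`-separated string. The following Go sketch illustrates how that format breaks down. It is illustrative only; the `parseSub` helper is hypothetical and is not part of any HCP or GitLab SDK.

```go
package main

import (
	"fmt"
	"strings"
)

// subClaim holds the components of a GitLab ID token "sub" claim,
// which has the form "project_path:<path>:ref_type:<type>:ref:<ref>".
type subClaim struct {
	ProjectPath string
	RefType     string
	Ref         string
}

// parseSub splits a GitLab-style sub claim into its labeled fields.
func parseSub(sub string) (subClaim, error) {
	var c subClaim
	// The claim is a fixed sequence of "label:value" pairs separated by ":".
	parts := strings.Split(sub, ":")
	if len(parts) != 6 || parts[0] != "project_path" || parts[2] != "ref_type" || parts[4] != "ref" {
		return c, fmt.Errorf("unexpected sub format: %q", sub)
	}
	c.ProjectPath, c.RefType, c.Ref = parts[1], parts[3], parts[5]
	return c, nil
}

func main() {
	c, err := parseSub("project_path:acme-group/acme-project:ref_type:branch:ref:main")
	if err != nil {
		panic(err)
	}
	// A conditional access statement such as
	//   jwt_claims.project_path == "acme-group/acme-project"
	// is equivalent to checking the parsed ProjectPath field, while
	// comparing jwt_claims.sub directly checks the whole string at once.
	fmt.Println(c.ProjectPath, c.RefType, c.Ref)
}
```

Matching on the full `sub` string is the most restrictive option; matching on individual claims such as `project_path` trades precision for flexibility.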
\ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/oidc.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/oidc.mdx new file mode 100644 index 0000000000..34bf9cdf1f --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/oidc.mdx @@ -0,0 +1,183 @@ +--- +page_title: Federate workload identity with other OIDC providers +description: |- + Workload identity federation enables external workloads to access HCP services through an external identity provider. Learn how to configure an OIDC provider and the HCP platform so that external workloads can authenticate with the HCP identity service. +--- + +# Federate workload identity with other OIDC providers + +This page describes how to set up workload identity federation to authenticate using an identity token from an identity provider (IdP). Authenticated workloads can interact with HCP services without storing any HCP service principal keys. + +## Prerequisites + +You must complete the following steps before configuring a workload identity provider: + +- You must have the `Admin` role on the HCP project. +- [Create a service principal in the desired project](/hcp/docs/hcp/iam/service-principal#project-level-service-principals) and grant it access to the HCP resources your workload requires. +- Install the [HCP CLI](/hcp/docs/cli/install) or use the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) based on your desired configuration workflow. + +### Identity provider prerequisites + +The IdP you use must meet the following requirements: + +- Support OpenID Connect 1.0. +- Have a publicly accessible [OIDC metadata](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata) and JWKS endpoint. 
The endpoint must be secured with TLS and begin with `https://`.

## Configure an external workload identity provider

To create an HCP workload identity provider for a custom OIDC IdP, you must provide the following configuration values:

- The issuer URI
- The expected audience
- A conditional access statement

### Audience

By default, an OIDC workload identity provider verifies that the incoming token has an `aud` field equal to the resource name of the workload identity provider. This behavior ensures that the token was intended to be exchanged with HCP.

The format for this value is:

```text
iam/project//service-principal//workload-identity-provider/
```

The following example identifies `my-sp` as the HCP service principal and `oidc-example` as the name of the workload identity provider that uses the service principal.

```text
iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-sp/workload-identity-provider/oidc-example
```

If you do not define an expected audience when you create the workload identity provider, ensure that the token the workload ultimately receives has its `aud` set to the formatted resource name.

You can also configure the provider to expect a custom `aud` claim.

### Conditional access statement

When federating workload identity, one of the requirements is a [conditional access statement](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement). This statement is a boolean expression that has access to the identity claims and restricts which external identities are allowed.

When exchanging an identity token for access to HCP, the conditional access statement has access to all the claims in the token with the `jwt_claims.` prefix. The following example shows the claims in a JWT token. The statements that follow use the information from this token.

```json
{
  "jti": "example-id",
  "sub": "env:prod::namespace:my-namespace::service:my-workload",
  "namespace": "my-namespace",
  "service": "my-workload",
  "env": "prod",
  "aud": "iam/project/dcffbc8c-0873-4acc-bf96-4c79a4c3fd1a/service-principal/my-sp/workload-identity-provider/oidc-example",
  "iss": "https://custom-oidc-idp.com",
  "nbf": 1632492967,
  "exp": 1632493867,
  "iat": 1632493567
}
```

In the following example, HCP only allows access to workloads when the token's `sub` field is `env:prod::namespace:my-namespace::service:my-workload`.

```text
jwt_claims.sub == "env:prod::namespace:my-namespace::service:my-workload"
```

In the following example, HCP allows workloads that have the service name `my-workload` and the namespace `my-namespace`, regardless of the environment where the workload originates.

```text
jwt_claims.sub matches "^env:.+::namespace:my-namespace::service:my-workload$"
```

In the following example, HCP allows workloads in the `my-namespace` namespace of the `prod` environment.

```text
jwt_claims.env == "prod" and jwt_claims.namespace == "my-namespace"
```

For more information about formatting conditional access statements, refer to [OIDC providers](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement#oidc-providers).

## Create the workload identity provider

After you compile the information you need, create the workload identity provider.
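Before you create the provider, you can sanity-check a conditional access statement offline against sample claims. The following Go sketch mimics the three example statements above. It is illustrative only: the `evaluate` helper is hypothetical, and HCP evaluates the real statements server-side with its own expression engine.

```go
package main

import (
	"fmt"
	"regexp"
)

// evaluate applies the three example conditional access statements to a
// claims map (the jwt_claims. prefix is dropped here for brevity) and
// reports which ones admit the token.
func evaluate(claims map[string]string) (exact, anyEnv, compound bool) {
	// jwt_claims.sub == "env:prod::namespace:my-namespace::service:my-workload"
	exact = claims["sub"] == "env:prod::namespace:my-namespace::service:my-workload"

	// jwt_claims.sub matches "^env:.+::namespace:my-namespace::service:my-workload$"
	anyEnv = regexp.MustCompile(`^env:.+::namespace:my-namespace::service:my-workload$`).MatchString(claims["sub"])

	// jwt_claims.env == "prod" and jwt_claims.namespace == "my-namespace"
	compound = claims["env"] == "prod" && claims["namespace"] == "my-namespace"
	return exact, anyEnv, compound
}

func main() {
	// Claims taken from the sample token shown earlier.
	claims := map[string]string{
		"sub":       "env:prod::namespace:my-namespace::service:my-workload",
		"namespace": "my-namespace",
		"service":   "my-workload",
		"env":       "prod",
	}
	exact, anyEnv, compound := evaluate(claims)
	fmt.Println(exact, anyEnv, compound)
}
```

A token from a `dev` environment would pass only the regex-based statement, which is the point of choosing between exact-match and pattern-match selectors.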

Use the [`hcp iam workload-identity-providers create-oidc` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-oidc):

```shell-session
$ hcp iam workload-identity-providers create-oidc \
  --service-principal= \
  --issuer= \
  --conditional-access= \
  --allowed-audience= \
  --allowed-audiences= \
  --description=
```

This command requires the following information that is specific to your HCP and IdP accounts:

- ``: The name of the workload identity provider to create.
- ``: The service principal's resource name, in the format `iam/project//service-principal/`.
- ``: The issuer URI for your IdP. Must start with `https://`.
- ``: The conditional access statement that restricts access to the specified workload.
- ``: Expected audience of the ID tokens. When you use the default expected audience, omit these flags.
- ``: An optional description for the provider.

Use the [`hcp_iam_workload_identity_provider` resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/iam_workload_identity_provider).

```hcl
# Replace with an existing service principal if created ahead of time.
resource "hcp_service_principal" "workload_sp" {
  name = "my-app-runtime"
}

resource "hcp_iam_workload_identity_provider" "example" {
  name              = "oidc-example"
  service_principal = hcp_service_principal.workload_sp.resource_name
  description       = "Allow my-workload to act as my-app-runtime service principal"

  oidc {
    issuer_uri = ""

    # If not using the default audience, configure up to 16 allowed audiences.
    # allowed_audiences = ["", "", ..., ""]
  }

  conditional_access = ""
}
```

This configuration requires the following information that is specific to your IdP:

- ``: The issuer URI for your IdP. Must start with `https://`.
- ``: Expected audience of the ID tokens when not using the default expected audience.
+- ``: The conditional access statement that restricts access to the specified workload. + + + + +## Authenticate the workload's credentials + +You can use the [HCP CLI](/hcp/docs/cli/install), [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs), or [HCP Go SDK](https://github.com/hashicorp/hcp-sdk-go/tree/main) to automatically retrieve external credentials and exchange them for an HCP access token. This process uses a credential file that contains the information required to obtain external credentials and the workload identity provider to exchange them with. + +For more information, refer to [credential files](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#credential-files). + +### Create the credential file for a custom IdP + +Use the [`hcp iam workload-identity-providers create-cred-file` command](https://developer.hashicorp.com/hcp/docs/cli/commands/iam/workload-identity-providers/create-cred-file). + +```shell-session +$ hcp iam workload-identity-providers create-cred-file \ + --source-[] + --output-file=credentials.json +``` + +This command requires the following information that is specific to your HCP account: + +- ``: The name of workload identity provider to exchange credentials with, in the format `iam/project//service-principal//workload-identity-provider/`. +- ``: Credential files can retrieve the workload’s token from an environment variable, a file, or URL. Select the appropriate flag and reference the command's documentation and examples. + +Ensure the credential file exists in the runtime environment. Because the credential file contains no secret values, you can store the credential file in the VM or container. Alternatively, you can generate it at runtime. + +To use the credential file to authenticate, you can use the HCP CLI, the HCP Terraform provider, or the HCP Go SDK. 
Refer to [use a credential file to authenticate](/hcp/docs/hcp/iam/service-principal/workload-identity-federation#use-a-credential-file-to-authenticate) for more information.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/index.mdx b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/index.mdx
new file mode 100644
index 0000000000..8f505acd0a
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/service-principal/workload-identity-federation/index.mdx
@@ -0,0 +1,159 @@
+---
+page_title: Workload identity federation
+description: |-
+  Workload identity federation enables external workloads to access HCP services through an external identity provider. Learn about workload identity federation, how it works, and how to use credential files.
+---
+
+# Workload identity federation
+
This topic provides an overview of workload identity federation on the HashiCorp Cloud Platform (HCP).

## Introduction

Configure a workload identity provider when you want to create a trust relationship between HashiCorp Cloud Platform (HCP) and an external identity provider. This trust relationship is called _federation_. Federated workloads can exchange an external identity token for an HCP service principal access token without storing service principal credentials with the workload.
+ +HCP supports workload identity federation through the [HCP CLI](/hcp/docs/cli/install) and the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) for the following identity providers: + +- [AWS](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/aws) +- [Azure](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/azure) +- [GCP](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gcp) +- [GitHub](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/github) +- [GitLab](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/gitlab) +- [Other OIDC providers](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/configure-provider/oidc) + +## Why use workload identity federation? + +The HCP identity service must authenticate any workload, such as a service, script, or container-based application, before the workload can access other HCP services. On HCP, workloads authenticate as a [service principal](/hcp/docs/hcp/iam/service-principal), typically using [service principal keys](/hcp/docs/hcp/iam/service-principal#keys). These credentials pose a security risk as anyone with access to them can authenticate with HCP. You must store them securely, rotate them regularly, and carefully manage their distribution. + +With workload identity federation, external workloads authenticate to HCP without storing any secret keys. After you federate workload identity, your workload sends its trusted identity token to HCP and retrieves an HCP access token when it needs to interact with HCP services. This process eliminates the need to manually manage credentials and lowers the risk of leaked secrets from service principals. + +## How does workload identity federation work? 

Workload identity federation relies on the fact that many platforms provide workloads with an externally verifiable identity. Your workload can use this external identity to authenticate with HCP and receive a service principal token in exchange.

First, configure HCP to trust the identity provider that minted the token. You need to tell HCP where to expect the identity from and how to verify that identity. HCP verifies the identity with a [conditional access statement](/hcp/docs/hcp/iam/service-principal/workload-identity-federation/conditional-access-statement), a boolean expression that asserts the identity's expected attributes.

For AWS, the identity provider configuration requires you to specify the AWS Account ID that originates the tokens. You can then restrict access to your AWS account by configuring the conditional access statement to restrict exchanges to a specific IAM Role. For example, the following conditional access statement restricts access to workloads running with the IAM Role named "example-role": `aws.arn matches "^arn:aws:sts::123456789012:assumed-role/example-role"`.

For OIDC providers, the identity provider configuration requires the Issuer URI, which tells HCP how to verify the token's validity. You may configure the `aud` of the token, but it must match what HCP expects, which by default is the workload identity provider's resource name. HCP uses the conditional access statement to restrict access to the correct upstream identity. For example, when using GitHub Actions, the following conditional access statement allows the `deploy` workflow in the GitHub repo `my-org/my-repo` to exchange its GitHub Actions identity for the service principal access token: `jwt_claims.repository == "my-org/my-repo" and jwt_claims.workflow == "deploy"`.

The request flow for the token exchange occurs in the following order:

![Workload identity federation diagram](/img/docs/hcp-core/workload-identity-federation-light.png#light-theme-only)

![Workload identity federation diagram](/img/docs/hcp-core/workload-identity-federation-dark.png#dark-theme-only)

1. The external workload requests a token from the external workload identity provider.
1. The external workload identity provider issues a token to the external workload.
1. The external workload requests an HCP access token from the HCP identity service and sends the external token for authentication.
1. The HCP identity service validates the external token.
1. The HCP identity service validates the conditional access statement.
1. The HCP identity service issues an access token to the external workload.
1. The external workload accesses the HCP services, authenticated as a service principal.

You must configure workload identity providers with [project-level service principals](/hcp/docs/hcp/iam/service-principal#project-level-service-principals). If you do not have a project-level service principal, you must create one before you can configure the external identity provider.

## Credential files

Tools such as the [HCP CLI](/hcp/docs/cli/install), the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs), and [the Go SDK](https://github.com/hashicorp/hcp-sdk-go) must authenticate with HCP before interacting with it. Credential files are a way to authenticate with HCP when using either service principal keys or external credentials.

HCP attempts to discover a credential file in the following order:

1. The `HCP_CRED_FILE` environment variable. The value should be the file path to the credential file, formatted as `/path/to/cred_file.json`.
1. The default credential file location, `~/.config/hcp/cred_file.json`.

To create a credential file with a service principal key, run the [`hcp iam service-principals key create` command with the `--output-cred-file` flag](/hcp/docs/cli/commands/iam/service-principals/keys/create).

To create a credential file for an external workload to authenticate using workload identity federation, use the [`hcp iam workload-identity-providers create-cred-file` command](/hcp/docs/cli/commands/iam/workload-identity-providers/create-cred-file).

### Use a credential file to authenticate

HCP automatically uses the credential file when detected. Set the environment variable `HCP_CRED_FILE` and point it to the credential configuration file.

```shell-session
$ export HCP_CRED_FILE=/path/to/credentials.json
```

Alternatively, you can configure supporting tools explicitly.

Use the [`hcp auth login` command](/hcp/docs/cli/commands/auth/login) to authenticate using the credential file.

```shell-session
$ hcp auth login --cred-file=/path/to/credentials.json
```

Configure the [HCP Terraform Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) to use a credential file.

```hcl
// Pin the version
terraform {
  required_providers {
    hcp = {
      source = "hashicorp/hcp"

      // Replace with desired version
      version = "~> 0.93.0"
    }
  }
}

// Configure the provider
provider "hcp" {
  credential_file = "/path/to/credentials.json"
}
```

Configure the [HCP Go SDK](https://github.com/hashicorp/hcp-sdk-go/tree/main) to use a credential file.

```go
package main

import (
	"log"

	vs "github.com/hashicorp/hcp-sdk-go/clients/cloud-vault-secrets/stable/2023-06-13/client/secret_service"
	"github.com/hashicorp/hcp-sdk-go/config"
	"github.com/hashicorp/hcp-sdk-go/httpclient"
)

func main() {
	// Construct HCP config
	hcpConfig, err := config.NewHCPConfig(
		config.WithCredentialFilePath("/path/to/credentials.json"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Construct HTTP client config
	httpclientConfig := httpclient.Config{
		HCPConfig: hcpConfig,
	}

	// Initialize SDK http client
	cl, err := httpclient.New(httpclientConfig)
	if err != nil {
		log.Fatal(err)
	}

	// Import versioned client for the desired service.
	vsClient := vs.New(cl, nil)

	// Use the client
	// resp, err := vsClient.OpenAppSecret(...)
	_ = vsClient // keep the example compilable until the client is used
}
```

\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/sso/default-role.mdx b/content/hcp-docs/content/docs/hcp/iam/sso/default-role.mdx
new file mode 100644
index 0000000000..ee6b264844
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/sso/default-role.mdx
@@ -0,0 +1,24 @@
+---
+page_title: Assign a default role for single sign-on (SSO)
+description: |-
+  You can assign a default role for an IdP's SSO integration with HCP to scope permissions for users.
+---
+
+# Assign a default role for single sign-on (SSO)
+
This page describes the process to assign a default role for HCP SSO.

## Assign a default role

To streamline permission management, we recommend that you set a default organization role for users. Admins can decide whether to assign a default organization role. To preserve least privilege, users who sign up when there is no default organization role have a limited experience within the platform until an organization or project admin assigns them an organization-scoped or project-scoped role. Learn more about [HCP permissions](/hcp/docs/hcp/iam/users#user-permissions).

### Post-single sign-on (SSO) connection

After you configure SSO, if you log out of your account and sign in with your SSO credentials, HCP assigns you the default organization role, and you lose your current admin capabilities. For effective administration, ensure an existing admin or owner remains logged in to the organization. This admin can modify permissions for SSO users, including yourself, once they have logged in.

The administrator who owns the organization and enabled SSO can still [use their original](/hcp/docs/hcp/iam/sso#admins-and-owners), non-SSO account to sign in to [the HCP portal](https://portal.cloud.hashicorp.com/) and access the SSO-enabled organization.

**Setting the default organization role to admin:** When assigning the default role for the organization, opt for least privilege access, such as no organization role or a viewer role. Exercise caution if you assign "admin" as the default organization role.

**Users without an organization role:** Users without organization roles cannot view resources or edit anything inside the organization until project-level or workspace-level roles are assigned to them after their first login as an SSO user. For more details, refer to [access management](/hcp/docs/hcp/iam/access-management#organization).
diff --git a/content/hcp-docs/content/docs/hcp/iam/sso/index.mdx b/content/hcp-docs/content/docs/hcp/iam/sso/index.mdx
new file mode 100644
index 0000000000..8b2735dbbd
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/sso/index.mdx
@@ -0,0 +1,82 @@
+---
+page_title: Single sign-on (SSO) overview
+description: |-
+  HCP supports SAML and OIDC single sign-on (SSO) integrations with popular identity providers. Learn how SSO with HCP works and find guidance to help you enable it.
+---
+
+# Single sign-on (SSO) overview
+
This topic provides an overview of single sign-on (SSO) with your preferred identity provider when users in your organization sign in to the HashiCorp Cloud Platform (HCP). To use HCP's SSO features, sign in to [the HCP Portal](https://portal.cloud.hashicorp.com).

## Introduction

HashiCorp Cloud Platform (HCP) allows organizations to configure both SAML 2.0 SSO and OpenID Connect (OIDC) SSO as an alternative to traditional user management with GitHub and email-based options. These security measures can help mitigate account takeover (ATO) attacks, provide a universal source of truth to federate identities from your identity provider (IdP), and help you better manage user access to your organization.
+ +### SAML and OIDC + +_Security Assertion Markup Language (SAML)_ is an XML-based open standard for exchanging authentication and authorization data between parties. + +_OpenID Connect (OIDC)_ is an authentication protocol based on the OAuth 2.0 framework that enables sign-in flows through a RESTful API and JSON payloads. + +HCP supports both SAML and OIDC for SSO. + +## Supported identity providers + +HCP supports SSO integrations with the following identity providers. + +| Identity provider | SAML documentation | OIDC documentation | +| :---------------- | :--------------------- | :----------------- | +| Okta | [Add a private SSO integration](https://developer.okta.com/docs/guides/add-private-app/saml2/main/) | [Add a private SSO integration](https://developer.okta.com/docs/guides/add-private-app/openidconnect/main/)| +| AzureAD | [Enable single sign-on for an enterprise application](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/add-application-portal-setup-sso) | [Add an OpenID Connect-based single sign-on application (OIDC)](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/add-application-portal-setup-oidc-sso) | +| Google Cloud | [Set up SSO for your organization](https://cloud.google.com/identity-platform/docs/web/saml) | [Signing in users with OIDC](https://cloud.google.com/identity-platform/docs/web/oidc) | +| AWS | [Set up single sign-on access to your applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/set-up-single-sign-on-access-to-applications.html) | [Create an OpenID Connect (OIDC) identity provider in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html) | +| Auth0 | [Manually configure Auth0 SSO integrations](https://auth0.com/docs/authenticate/single-sign-on/outbound-single-sign-on/configure-auth0-saml-identity-provider#manually-configure-sso-integrations) | [Adopt OIDC-Conformant 
Authentication](https://auth0.com/docs/authenticate/login/oidc-conformant-authentication) | +| JumpCloud | [SAML Single Sign-on](https://jumpcloud.com/support/get-started-applications-saml-sso#using-sso-applications-with-jumpcloud) | [SSO with OIDC](https://jumpcloud.com/support/sso-with-oidc) | +| PingID | [Add a SAML application](https://docs.pingidentity.com/pingone/pingone_tutorials/p1_p1tutorial_add_a_saml_app.html) | [Adding an identity provider - OIDC](https://docs.pingidentity.com/pingone/integrations/p1_add_idp_oidc.html) | +| CyberArk | [Federate with an external IdP using SAML](https://docs.cyberark.com/wpm/latest/en/content/coreservices/usersroles/partneradd.htm?TocPath=Setup%7CAdd%20Users%7CSet%20up%20federation%20with%20external%20identity%20providers%7CFederate%20with%20an%20external%20IdP%20using%20SAML%7C_____0) | [Federate with an external IdP using OIDC](https://docs.cyberark.com/wpm/latest/en/content/coreservices/usersroles/oidcexternalidp.htm?TocPath=Setup%7CAdd%20Users%7CSet%20up%20federation%20with%20external%20identity%20providers%7CFederate%20with%20an%20external%20IdP%20using%20OIDC%7C_____0) | +| Duo Security | [Duo Single Sign-On for Generic SAML Service Providers](https://duo.com/docs/sso-generic) | [Duo Single Sign-On for Generic OpenID Connect (OIDC) Relying Parties](https://duo.com/docs/sso-oidc-generic) | +| One Login | [Advanced SAML Custom Connector](https://onelogin.service-now.com/support?id=kb_article&sys_id=8a1f3d501b392510c12a41d5ec4bcbcc) | [Adding & Configuring an OIDC Application](https://onelogin.service-now.com/support?id=kb_article&sys_id=c690686d8749c210f7b8a7dd3fbb35b2#mcetoc_1g9fdscsk67) | + +## SAML attributes + +When you set up your identity provider, these are the SAML attributes you use: + +| Instructions | SAML Attribute | Map to your identity provider | +| :----------- | :---------------------------------------------------------------- | :---------------------------- | +| Required | NameID | User’s email | +| 
Optional | `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname` | User's first name | +| Optional | `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname` | User's last name | +| Optional | `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name`, or
`http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` | Internal identity for the user that never changes. Do not use the user's email address for this ID. | + +## Workflow + +The process to enable SSO for an HCP organization consists of the following steps. + +1. If necessary, verify your domain with HCP. +1. Initiate SSO creation on HCP. +1. Continue configuration with your preferred identity provider. +1. Add information from your identity provider to HCP. +1. [Assign a default role](/hcp/docs/hcp/iam/sso/default-role) to users. + +After you enable SSO, you can manage, update, and delete your SSO from HCP. For more information, refer to [manage SSO for your organization](/hcp/docs/hcp/iam/sso/manage). + +## SSO integration with HCP Terraform + +If you signed up for HCP Terraform with an existing HCP account, you may encounter an error when you attempt to use SSO to sign in to HCP Terraform. + +HCP Terraform’s SSO requires a login with both an email and password in order to map to an SSO identity. As a result, users who sign up for HCP Terraform using an existing HCP account cannot set up a proper identity for SSO. + +## Guidance + +The following HashiCorp resources are available to help you use HCP’s single sign-on features. 
+
+### Usage documentation
+
+- [Set up SAML SSO](/hcp/docs/hcp/iam/sso/setup/saml)
+- [Set up OIDC SSO](/hcp/docs/hcp/iam/sso/setup/oidc)
+- [Assign a default role](/hcp/docs/hcp/iam/sso/default-role)
+- [Manage SSO for your organization](/hcp/docs/hcp/iam/sso/manage)
+
+### Troubleshooting
+
+- [Troubleshoot HCP single sign-on](/hcp/docs/hcp/iam/sso/troubleshoot)
diff --git a/content/hcp-docs/content/docs/hcp/iam/sso/manage.mdx b/content/hcp-docs/content/docs/hcp/iam/sso/manage.mdx
new file mode 100644
index 0000000000..cb4427f8da
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/sso/manage.mdx
@@ -0,0 +1,62 @@
+---
+page_title: Manage SSO for your HCP organization
+description: |-
+  You can use HCP to manage single sign-on (SSO) for an organization. Learn how to manage, update, and delete existing SSO integrations on HCP.
+---
+
+# Manage SSO for your HCP organization
+
+This page describes the processes to manage SSO configurations for an HCP organization, including how to update and delete an existing SSO configuration.
+
+## Manage an HCP organization with SSO enabled
+
+Organization owners and admins can configure SSO. The **Single Sign-On** page in **Settings** displays a summary of the current SSO configuration.
+
+### Users
+
+When you enable SSO for an organization, the user invitations feature is no longer offered. You must provision new users through the external identity provider.
+
+User accounts that join through SSO are limited to that one organization, cannot be associated with an existing personal account such as GitHub or email, and cannot be invited to other organizations within HCP.
+After you provision a new user, HCP grants them the default [role](/hcp/docs/hcp/iam/users#user-permissions) you selected when configuring SSO for your organization.
+An HCP administrator can then manually update and increase their user permissions on the [HCP Access Control](https://portal.cloud.hashicorp.com/access/users) page.
+
+Existing personal user accounts can still access the organization unless an administrator removes them. Existing SAML user accounts with emails matching the configured SSO domain must log in with the SSO URL link. This link is available on the **Single Sign-On** page in **Settings**.
+
+Delete SSO accounts for users that were removed from your identity provider, and ensure that any permissions or tokens an HCP product granted to them are also removed.
+
+### Admins and owners
+
+The administrator who owns the organization and enabled SSO can still use their original, non-SSO account to sign in to the HCP web portal and access the SSO-enabled organization. If they previously signed in through GitHub, they can continue doing so. If they signed in with an email and password, they can use a special [force email + password sign-in](https://portal.cloud.hashicorp.com/sign-in?with=email) link. This is because the login page defaults to SSO and hides the password field when an email matches the configured SSO domain.
+
+The organization owner can also sign up with a new SSO user principal and promote themselves to **Admin** if appropriate. However, they cannot remove their old user account or transfer ownership. The old account serves as a recovery option if the SSO configuration requires troubleshooting.
+
+## Update SSO
+
+Organization owners and admins can edit an SSO configuration.
+
+To edit SSO:
+
+1. Click **Settings** and then click **SSO**. You will be redirected to the **Single Sign-On** page.
+1. Open the **Manage** menu and select **Edit**. Users can modify the list of domains, the public signing certificate, endpoints, and the default organization role.
+
+Users can add and remove domains, but the domain list cannot be empty.
+
+- Adding a new domain allows users with an email address matching the domain to sign up as new SSO users. SSO users with email addresses in other domains are not affected.
You must also provision new domains on your identity provider and configure them for the Auth0-SSO-Connection.
+- Removing an existing domain affects SSO users whose email addresses match the removed domain. They can still sign in through other methods, but HCP treats them as different users. Organization administrators can remove inactive users from the organization.
+
+## Delete SSO
+
+Organization owners and admins can delete an SSO configuration from their organization.
+
+
+
+When you delete an SSO configuration, no SSO user can sign in to HCP. Current SSO users will remain in the organization as **inactive**.
+
+
+
+To delete SSO from an organization:
+
+1. Select **Delete SSO Configuration** in the **Manage** menu. A dialog appears for you to confirm the deletion of SSO from this organization.
+1. Type **DELETE** and then click **Delete**.
+
+After deletion, organization owners and admins can [re-invite users](/hcp/docs/hcp/iam/users#invite-users) with the default Access Controls (IAM) system.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/sso/setup/oidc.mdx b/content/hcp-docs/content/docs/hcp/iam/sso/setup/oidc.mdx
new file mode 100644
index 0000000000..d0f94532c0
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/sso/setup/oidc.mdx
@@ -0,0 +1,137 @@
+---
+page_title: Set up OIDC SSO
+description: |-
+  HCP supports OIDC single sign-on (SSO) with a number of identity providers. Learn how to enable OIDC SSO so that users can sign in to an HCP organization with existing credentials.
+---
+
+# Set up OIDC for SSO
+
+This page describes the process to set up OIDC integration for HCP single sign-on.
You can configure HCP for OIDC SSO with the following identity providers: + +- Auth0 +- AWS +- Azure Entra ID +- CyberArk +- Duo Security +- Google Cloud +- JumpCloud +- Okta +- One Login +- PingID + +## Prerequisites + +@include 'requirements/sso.mdx' + +@include 'hcp-administration/verify-domain.mdx' + +## Enable SSO for HCP + +After your domain is verified, you can set up OIDC SSO with your preferred identity provider. + + + + + +### Configure Auth0 + +Follow the steps in the Auth0 documentation to [adopt OIDC-Conformant Authentication](https://auth0.com/docs/authenticate/login/oidc-conformant-authentication). + + + + + +### Configure AWS + +Follow the steps in the AWS documentation to [create an OpenID Connect (OIDC) identity provider in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html). + + + + + +### Configure Azure + +Follow the steps in the Azure documentation to [add an OpenID Connect-based single sign-on application](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/add-application-portal-setup-oidc-sso). + + + + + +### Configure CyberArk + +Follow the steps in the CyberArk documentation to [federate with an external IdP using OIDC](https://docs.cyberark.com/wpm/latest/en/content/coreservices/usersroles/oidcexternalidp.htm?TocPath=Setup%7CAdd%20Users%7CSet%20up%20federation%20with%20external%20identity%20providers%7CFederate%20with%20an%20external%20IdP%20using%20OIDC%7C_____0). + + + + + +### Configure Duo Security + +Follow the steps in the Duo Security documentation to set up [Duo Single Sign-On for Generic OpenID Connect (OIDC) Relying Parties](https://duo.com/docs/sso-oidc-generic). + + + + + +### Configure Google Cloud + +Follow the steps in the Google Cloud documentation for [signing in users with OIDC](https://cloud.google.com/identity-platform/docs/web/oidc). 
+
+
+
+
+### Configure JumpCloud
+
+Follow the steps in the JumpCloud documentation for [SSO with OIDC](https://jumpcloud.com/support/sso-with-oidc).
+
+
+
+
+
+### Configure Okta
+
+Follow the steps in the Okta documentation to [add a private SSO integration](https://developer.okta.com/docs/guides/add-private-app/openidconnect/main/).
+
+
+
+
+
+### Configure One Login
+
+Follow the steps in the One Login documentation for [Adding & Configuring an OIDC Application](https://onelogin.service-now.com/support?id=kb_article&sys_id=c690686d8749c210f7b8a7dd3fbb35b2#mcetoc_1g9fdscsk67).
+
+
+
+
+
+### Configure PingID
+
+Follow the steps in the PingID documentation to [Add an identity provider - OIDC](https://docs.pingidentity.com/pingone/integrations/p1_add_idp_oidc.html).
+
+
+
+
+## Initiate integration on HCP
+
+1. [Log in to HCP](https://portal.cloud.hashicorp.com/) and go to your organization.
+1. From your organization, click **Organization settings**.
+1. Click **SSO**. Then click **Configure SSO for your organization**.
+1. Select **OIDC**.
+1. Enter the following values from your configured identity provider:
+
+   - **Client ID**
+   - **Client Secret**
+   - **Issuer URL**
+
+## Complete SSO setup
+
+1. Assign a [default organization role](/hcp/docs/hcp/iam/sso/default-role) for users.
+1. Optionally, turn on **Assign users an organization role**.
+1. Click **Save**.
+
+Now users can sign in to your HCP organization using an existing identity provider.
+
+## Next steps
+
+@include 'next-steps/sso.mdx'
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/sso/setup/saml.mdx b/content/hcp-docs/content/docs/hcp/iam/sso/setup/saml.mdx
new file mode 100644
index 0000000000..a5870225aa
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/sso/setup/saml.mdx
@@ -0,0 +1,206 @@
+---
+page_title: Set up SAML SSO
+description: |-
+  HCP supports SAML single sign-on (SSO) with a number of identity providers.
Learn how to enable SAML SSO so that users can sign in to an HCP organization with existing credentials.
+---
+
+# Set up SAML SSO
+
+This page describes the process to set up SAML integration for HCP single sign-on. You can configure HCP for SAML SSO with the following identity providers:
+
+- Auth0
+- AWS
+- Azure Entra ID
+- CyberArk
+- Duo Security
+- Google Cloud
+- JumpCloud
+- Okta
+- One Login
+- PingID
+
+## Prerequisites
+
+@include 'requirements/sso.mdx'
+
+@include 'hcp-administration/verify-domain.mdx'
+
+## Enable SAML SSO for HCP
+
+After your domain is verified, you can set up SAML SSO.
+
+### Initiate integration on HCP
+
+1. [Log in to HCP](https://portal.cloud.hashicorp.com/) and go to your organization.
+1. From your organization, click **Organization settings**.
+1. Click **SSO**. Then click **Configure SSO for your organization**.
+1. Select **SAML**.
+1. Copy the following values to enter into your identity provider.
+
+   - **SSO Sign-On URL**
+   - **Entity ID**
+   - **Email Attribute Assertion Name**
+
+Open a new tab in your web browser to continue the configuration with your preferred identity provider.
+
+
+
+
+
+### Configure Auth0
+
+Follow the steps to [manually configure Auth0 SSO integrations](https://auth0.com/docs/authenticate/single-sign-on/outbound-single-sign-on/configure-auth0-saml-identity-provider#manually-configure-sso-integrations).
+
+Enter the following values from HCP into your Auth0 environment.
+
+- **SSO Sign-On URL**
+- **Entity ID**
+- **Email Attribute Assertion Name**
+
+
+
+
+
+### Configure AWS
+
+Follow the steps to [set up single sign-on access to your applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/set-up-single-sign-on-access-to-applications.html).
+
+Enter the following values from HCP into your AWS environment.
+
+- **SSO Sign-On URL**
+- **Entity ID**
+- **Email Attribute Assertion Name**
+
+
+
+
+
+### Configure Azure
+
+Follow the steps to [integrate a private application in the Azure documentation](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/add-application-portal-setup-sso).
+
+Enter the following values from HCP into your Azure environment.
+
+- **SSO Sign-On URL**
+- **Entity ID**
+- **Email Attribute Assertion Name**
+
+
+
+
+
+### Configure CyberArk
+
+Follow the steps to [federate with an external IdP using SAML](https://docs.cyberark.com/wpm/latest/en/content/coreservices/usersroles/partneradd.htm?TocPath=Setup%7CAdd%20Users%7CSet%20up%20federation%20with%20external%20identity%20providers%7CFederate%20with%20an%20external%20IdP%20using%20SAML%7C_____0).
+
+Enter the following values from HCP into your CyberArk environment.
+
+- **SSO Sign-On URL**
+- **Entity ID**
+- **Email Attribute Assertion Name**
+
+
+
+
+
+### Configure Duo Security
+
+Follow the steps for [Duo Single Sign-On for Generic SAML Service Providers](https://duo.com/docs/sso-generic).
+
+Enter the following values from HCP into your Duo Security environment.
+
+- **SSO Sign-On URL**
+- **Entity ID**
+- **Email Attribute Assertion Name**
+
+
+
+
+
+### Configure Google
+
+Follow the steps to [integrate a private application in the Google Cloud documentation](https://cloud.google.com/identity-platform/docs/web/saml).
+
+Enter the following values from HCP into your Google Cloud environment.
+
+- **SSO Sign-On URL**
+- **Entity ID**
+- **Email Attribute Assertion Name**
+
+
+
+
+
+### Configure JumpCloud
+
+Follow the steps to [Get Started: SAML Single Sign-On (SSO) in the JumpCloud documentation](https://jumpcloud.com/support/get-started-applications-saml-sso#using-sso-applications-with-jumpcloud).
+
+Enter the following values from HCP into your JumpCloud environment.
+ +- **SSO Sign-On URL** +- **Entity ID** +- **Email Attribute Assertion Name** + + + + + +### Configure Okta + +Follow the steps to [integrate a private application in the Okta documentation](https://developer.okta.com/docs/guides/add-private-app/saml2/main/). + +Enter the following values from HCP into your Okta environment. + +- **SSO Sign-On URL** +- **Entity ID** +- **Email Attribute Assertion Name** + + + + + +### Configure One Login + +Follow the steps to set up the [Advanced SAML Custom Connector](https://onelogin.service-now.com/support?id=kb_article&sys_id=8a1f3d501b392510c12a41d5ec4bcbcc). + +Enter the following values from HCP into your One Login environment. + +- **SSO Sign-On URL** +- **Entity ID** +- **Email Attribute Assertion Name** + + + + + +### Configure PingID + +Follow the steps to [add a SAML application](https://docs.pingidentity.com/pingone/pingone_tutorials/p1_p1tutorial_add_a_saml_app.html). + +Enter the following values from HCP into your PingID environment. + +- **SSO Sign-On URL** +- **Entity ID** +- **Email Attribute Assertion Name** + + + + +### Continue integration on HCP + +Return to HCP. Enter the following information from your identity provider. + +- **SAML IDP Single Sign-On URL** +- **SAML IDP Certificate** + +## Complete SSO setup + +1. Assign a [default organization role](/hcp/docs/hcp/iam/sso/default-role) for users. +1. Optionally, turn on **Assign users an organization role**. +1. Click **Save**. + +Now users can sign in to your HCP organization using an existing identity provider. 
+
+## Next steps
+
+@include 'next-steps/sso.mdx'
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/iam/sso/troubleshoot.mdx b/content/hcp-docs/content/docs/hcp/iam/sso/troubleshoot.mdx
new file mode 100644
index 0000000000..d08ef3527b
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/sso/troubleshoot.mdx
@@ -0,0 +1,140 @@
+---
+page_title: Troubleshooting single sign-on (SSO) on HCP
+description: |-
+  Learn about the errors you may receive when setting up SSO on HCP and how to resolve them.
+---
+
+# Troubleshooting single sign-on (SSO) on HCP
+
+This page describes the troubleshooting process when setting up a preferred identity provider for SSO on HCP.
+
+## Overview
+
+When enabling SSO for HCP, issues belong to one of two categories:
+
+- _Errors in HCP setup_ are within our support scope.
+- _Errors in IdP setup_ are outside the scope of our support. Refer to your identity provider’s documentation for more information and additional troubleshooting resources.
+
+## Common error messages
+
+- [`Access was denied while authenticating`](#access-was-denied-while-authenticating)
+- [`An error occurred with authentication.`](#an-error-occurred-with-authentication)
+- [`invalid_request: IdP-Initiated login is not enabled`](#invalid_request-idp-initiated-login-is-not-enabled)
+- [`Not recognized as a verified TXT record`](#not-recognized-as-a-verified-txt-record)
+- [`Something went wrong.` (OIDC)](#something-went-wrong-oidc)
+- [`Something went wrong.` (SAML)](#something-went-wrong-saml)
+- [`Unable to proceed with request`](#unable-to-proceed-with-request)
+
+### Access was denied while authenticating
+
+**Error message**: `Access was denied while authenticating.`
+
+**Cause**: This error can occur if the Entity ID configured in your SAML SSO settings does not match the Entity ID assigned by HCP.
+
+**Solutions**: Ensure the configured Entity ID matches the one in HCP.
+
+### An error occurred with authentication
+
+**Error message**: `An error occurred with authentication.`
+
+**Cause**: This error indicates an issue verifying a token or claim request against user metadata.
+
+**Solutions**: Follow the instructions on the error page, which provides additional information about the error.
+
+### invalid_request: IdP-Initiated login is not enabled
+
+**Error message**: `invalid_request: IdP-Initiated login is not enabled.`
+
+**Cause**: The HCP SSO integration currently requires you to log in directly from the HCP UI with your SSO credentials. If you try to log in to HCP directly from your SSO platform, you receive an error similar to the following.
+
+
+
+```plain-text
+invalid_request: IdP-Initiated login is not enabled for connection "HCP-SSO-11eb58f9-5983-1701-8c33-0242ac110016-samlp".
+TRACKING ID: 1ee9c265894f363dd226
+```
+
+
+
+You may also receive an `Oops!, something went wrong` message.
+
+**Solutions**: Log in to HCP directly from the HCP Portal with your SSO credentials.
+
+#### Okta SAML workaround
+
+As an alternative workaround, you can create a Bookmark App in Okta to provide a tile that mimics IdP-initiated login to HCP. The URL that can be used is `https://portal.cloud.hashicorp.com/login/signin?conn-id=HCP-SSO-ORGID-samlp`.
+
+You should replace `ORGID` with the actual organization ID, which can be found under **Organization Settings** in the HCP Portal.
+
+#### Microsoft Entra ID (Azure AD) SAML workaround
+
+As an alternative workaround, you can set the **Sign-on URL** under the Basic SAML Configuration settings. The URL that can be used is `https://portal.cloud.hashicorp.com/login/signin?conn-id=HCP-SSO-ORGID-samlp`.
+
+You should replace `ORGID` with the actual organization ID, which can be found under **Organization Settings** in the HCP Portal.
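Both workarounds use the same URL pattern. As a quick sketch, you can assemble the URL in a shell before pasting it into Okta or Entra ID (the organization ID below is illustrative, taken from the error example above):

```shell-session
$ ORG_ID="11eb58f9-5983-1701-8c33-0242ac110016"   # replace with your organization ID
$ echo "https://portal.cloud.hashicorp.com/login/signin?conn-id=HCP-SSO-${ORG_ID}-samlp"
https://portal.cloud.hashicorp.com/login/signin?conn-id=HCP-SSO-11eb58f9-5983-1701-8c33-0242ac110016-samlp
```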
+
+### Not recognized as a verified TXT record
+
+**Error message**: `Not recognized as a verified TXT record.`
+
+**Cause**: This error occurs when the appropriate TXT record was not created, or when it was incorrectly configured in your domain host.
+
+**Solutions**: Add the TXT record to your domain host or edit the existing TXT record to match the expected format.
+
+- You can validate whether you see a TXT entry for HCP using either the `dig` or `host` command.
+
+  ```shell-session
+  $ dig -t txt domain.com
+  ```
+
+  ```shell-session
+  $ host -t TXT domain.com
+  ```
+
+- Alternatively, you can use an online DNS lookup tool to validate the TXT records.
+
+When querying your domain's TXT records, you should see a record with a value similar to `hcp-domain-verification=c886c6010596fb39XXXX18bd80c77073b3584`.
+
+After the domain is successfully verified, you can continue with the rest of the SSO setup steps.
+
+### Something went wrong (OIDC)
+
+**Error message**: `Something went wrong.`
+
+This error redirects you to the HCP portal.
+
+**Cause**: The issuer URL must end with a trailing slash, for example `https://example-idp.com/`.
+
+**Solutions**: Add the trailing slash to the URL, or adjust as necessary to match the pattern.
+
+### Something went wrong (SAML)
+
+**Error message**: `Something went wrong.`
+
+**Cause**: Either the user provided an invalid certificate to validate the SAML response signature, or the user did not set the email as an attribute assertion name on the upstream IdP.
+
+**Solutions**: To check the certificate, you can usually obtain the required information from the identity provider's `metadata.xml`. Identity providers usually provide an endpoint to download this file.
+
+For more information on attribute assertion, refer to the list of [supported SAML attributes](/hcp/docs/hcp/iam/sso#saml-attributes).
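When you suspect a certificate problem, it can help to inspect the certificate on the command line and compare it with what your IdP currently serves. This is a minimal sketch, assuming you saved the signing certificate from `metadata.xml` as a PEM file named `idp-cert.pem` (a hypothetical filename):

```shell-session
$ openssl x509 -in idp-cert.pem -noout -subject -enddate -fingerprint
```

Confirm that the subject, expiry date, and fingerprint match the certificate in your IdP metadata, and that no extra whitespace was introduced when you pasted the certificate into HCP.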
+
+### Unable to proceed with request
+
+**Error message**: `Unable to proceed with the request.`
+
+**Cause**: This error is caused by a misconfigured certificate or an invalid ACS URL.
+
+**Solutions**:
+
+- Certificate mismatch. Make sure the certificates match, and check that no extra space or character was added at the end while setting up the certificate.
+- Invalid ACS URL. While we provide an **SSO Sign-On URL** in the "Initiate SAML Integration" instructions, some IdPs receive the request at a path which omits the `?connection=HCP-SSO-ORGID-saml` argument. Try to use this URL instead for your ACS URL in your IdP settings:
+
+  ```plain-text
+  https://auth.hashicorp.com/login/callback
+  ```
+
+## Reused domains
+
+You must create a DNS TXT record with a secret value to prove ownership of a domain. HCP uses the domain to match the email addresses for SSO. You must use different SSO domains for each HCP organization. If you try to reuse a domain name, the DNS connection request will fail.
+
+## Support
+
+If you experience other issues when enabling SSO for your HCP organization, refer to the [HashiCorp Help Center](https://support.hashicorp.com/hc/en-us/categories/4404266838931-HCP) or contact our Support team.
diff --git a/content/hcp-docs/content/docs/hcp/iam/users.mdx b/content/hcp-docs/content/docs/hcp/iam/users.mdx
new file mode 100644
index 0000000000..05ee4b9be5
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/iam/users.mdx
@@ -0,0 +1,34 @@
+---
+page_title: Users
+description: |-
+  This topic describes how to create HashiCorp Cloud Platform (HCP) users.
+---
+
+# Users
+
+When you sign up for a HashiCorp Cloud Platform (HCP) account for the first
+time, the HCP Portal takes you to the [create
+organization](https://portal.cloud.hashicorp.com/orgs/create) page to set up
+your organization. You can invite additional users to the organization so that
+they can access the resources.
+
+This page describes how to add users to your HashiCorp Cloud Platform (HCP) account and manage their access to resources.
+
+## Invite users
+
+Use the following procedure to invite users to your organization by email. The
+[organization admin role](#organization-role) is required to invite and manage
+users.
+
+@include '/hcp-administration/invite-users.mdx'
+
+## Manage users
+
+@include '/hcp-administration/manage-users.mdx'
+
+## User permissions
+
+@include '/hcp-administration/permission-intro.mdx'
+
+## Access management
+
+For more information about permissions, the different types of roles, and how to use them within HCP, check out the [Access Management](/hcp/docs/hcp/iam/access-management) page.
diff --git a/content/hcp-docs/content/docs/hcp/index.mdx b/content/hcp-docs/content/docs/hcp/index.mdx
new file mode 100644
index 0000000000..b38835e327
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/index.mdx
@@ -0,0 +1,67 @@
+---
+page_title: What is HCP?
+description: |-
+  HashiCorp Cloud Platform (HCP) is HashiCorp's first-party platform for hosting our products.
+---
+
+# What is HCP?
+
+HashiCorp Cloud Platform (HCP) is a fully managed platform offering HashiCorp products-as-a-service. HCP removes the management overhead associated with deploying and maintaining HashiCorp products so that you can focus on reaping the products' benefits.
+
+HCP enables you to easily launch and operate Consul, Vault, and other HashiCorp services on a HashiCorp Virtual Network (HVN). An HVN connects to resources on your cloud infrastructure. Shared platform functionality such as login, access control, and billing provides centralized account and organization management. You can manage HCP assets from the web portal interface or using the Terraform provider.
+
+
+Complete the following tutorials for step-by-step guidance on getting started:
+
+- [Get Started with HCP Consul](/consul/tutorials/get-started-hcp/hcp-gs-deploy)
+- [Get Started with HCP Vault Dedicated](/vault/tutorials/cloud)
+- [Get Started with HCP Packer](/packer/tutorials/hcp-get-started)
+- [Get Started with HCP Boundary](/boundary/tutorials/hcp-getting-started)
+- [Get Started with HCP Vault Secrets](/vault/tutorials/hcp-vault-secrets-get-started)
+
+
+
+
+## How does HCP work?
+
+The following diagram shows the basic workflow through both the HCP and HCP Terraform portals:
+
+![Diagram](/img/docs/hcp-arch-diagram.png)
+
+HashiCorp Cloud Platform (HCP) has two main planes for interacting with the platform.
+
+### Control plane
+
+The _control plane_ refers to the systems that control your product deployments.
+You can initiate operations such as user management, product deployment, and monitoring and maintenance.
+Use [the HCP Portal](https://portal.cloud.hashicorp.com/) to interact with your deployed resources
+(e.g., HashiCorp Virtual Network (HVN), Consul, Vault).
+
+### Data plane
+
+The _data plane_ consists of your resource deployments on the cloud platforms you use and is managed by the HashiCorp SRE team.
+HCP has one data plane for hosting multiple tenants, also called _organizations_. You can create as many organizations as necessary to meet your goals.
+Organizations are isolated and secured from other organizations.
+
+The HCP data plane is hosted on each supported cloud provider.
+Each component in the data plane is deployed into a separately managed virtual private cloud (VPC) on the host cloud.
+The VPC is managed by HashiCorp but is unique to each user. Consul, Vault, and other assets are always separated into their own VPCs.
+You can create as many additional VPCs as needed, but you must have at least one to deploy Consul or Vault.
+HCP automatically handles the data plane when you create a new HVN.
+
+## Why HCP?
+
+HashiCorp Cloud Platform (HCP) services offer practitioners and organizations the fastest way to get started with HashiCorp’s tools. Use HCP to accelerate your time-to-value, and leave the day-to-day operational toil to HashiCorp SREs. In contrast, HashiCorp’s self-managed enterprise products prioritize control over convenience.
+
+* **Push-button deployments**: Production-grade infrastructure, built-in security, and pay-as-you-go pricing accelerate cloud adoption.
+* **One workflow across clouds**: HashiCorp’s centralized identity, policies, and virtual networks enable consistency and flexibility for your team.
+* **Fully managed infrastructure**: HashiCorp experts manage, monitor, upgrade, and scale your clusters to help increase productivity and reduce your costs.
+
+## Community
+
+Ask questions, make suggestions, and contribute to the community.
+
+* [Ask questions](https://discuss.hashicorp.com/c/hcp/54) in the official HashiCorp forum
+* [Submit an issue](https://support.hashicorp.com/hc) for bugs and feature requests
+
diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-aws/hvn-aws.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-aws/hvn-aws.mdx
new file mode 100644
index 0000000000..4715f71395
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/network/hvn-aws/hvn-aws.mdx
@@ -0,0 +1,61 @@
+---
+page_title: Create and manage an HVN
+description: |-
+  This topic describes how to create and manage a HashiCorp Virtual Network (HVN) for AWS.
+---
+
+# Create and manage an HVN
+
+You can create and manage a HashiCorp Virtual Network (HVN) for AWS. Use an HVN
+to delegate an IPv4 CIDR range to HCP. The platform uses this CIDR range to
+automatically create a virtual private cloud (VPC).
+
+## Specification
+
+- You can create one HVN for each available cloud region.
+
+- Resources added to an HVN appear in the HVN's cloud region.
Deploying a
+  cluster into an HVN created in the us-east-1 region, for example, adds the
+  cluster to the us-east-1 region.
+
+- Each HCP resource must be located in a single HVN. A product deployment
+  cannot span two different HVNs.
+
+- You cannot move product deployments from one HVN to another.
+
+- You cannot change HVNs after you deploy them.
+
+## Create an HVN
+
+@include '/hcp-network/create-hvn-aws.mdx'
+
+## Connect an HVN to AWS
+
+To connect your HashiCorp Virtual Network to your AWS infrastructure, you must first create either a peering connection or a transit gateway attachment. Then, specify traffic routes so that clusters can communicate with client resources. Individual configuration instructions are available:
+
+* [Peering Connections](/hcp/docs/hcp/network/hvn-aws/hvn-peering)
+* [Transit Gateway Attachments](/hcp/docs/hcp/network/hvn-aws/tgw-attach)
+* [Routes](/hcp/docs/hcp/network/hvn-aws/routes)
+* [Security Groups](/hcp/docs/hcp/network/hvn-aws/security-groups)
+
+## Manage an HVN
+
+You cannot modify HVNs after you deploy them, but the following management features are available.
+
+### Import to Terraform
+
+HCP generates a command that you can copy and run to import and manage the HVN in Terraform:
+
+1. Sign in to [the HCP Portal](https://portal.cloud.hashicorp.com/) and select your organization.
+1. From the sidebar, click **HashiCorp Virtual Network**.
+1. Click on an HVN in the **ID** column.
+1. From the **Manage** menu, copy the provided `terraform import` command.
+1. Open your terminal and run the command.
+
+### Delete an HVN
+
+1. Sign in to [the HCP Portal](https://portal.cloud.hashicorp.com/) and select your organization.
+1. From the sidebar, click **HashiCorp Virtual Network**.
+1. Click on an HVN in the **ID** column.
+1. From the **Manage** menu, click **Delete**.
+1. When prompted, select **Confirm**.
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-aws/hvn-peering.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-aws/hvn-peering.mdx
new file mode 100644
index 0000000000..53fc877a8b
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/network/hvn-aws/hvn-peering.mdx
@@ -0,0 +1,58 @@
+---
+page_title: Peering connections
+description: |-
+  This topic describes how to create a peering connection between HCP and your virtual private cloud (VPC) in AWS.
+---
+
+# Peering connections
+
+You can create a peering connection between HashiCorp Cloud Platform (HCP) and
+your virtual private cloud (VPC) in AWS to allow traffic between services.
+
+## Overview
+
+HCP Consul Dedicated and HCP Vault Dedicated use peering connections to communicate with the clients
+hosted in your AWS environment.
+
+You can create peering connections from [the HCP Portal](https://portal.cloud.hashicorp.com/) or the
+HCP provider in Terraform. For instructions on how to create peering connections
+with Terraform, refer to the [HCP provider
+documentation](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/aws_network_peering).
+
+For larger environments, we recommend connecting HCP to your VPCs through a [transit
+gateway](/hcp/docs/hcp/network/hvn-aws/tgw-attach).
+
+## Requirements
+
+- An AWS account ID
+- The ID of the VPC you wish to connect
+- VPCs must be configured with [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) or [RFC6598 specification](https://datatracker.ietf.org/doc/html/rfc6598) IP addresses.
+- No active PrivateLink service is associated with the HVN
+
+## Create peering connections
+
+There are two methods to create a peering connection between the HCP HVN and an AWS VPC: manual or automated.
+
+The automated method connects to your AWS account and launches a CloudFormation template to complete the peering configuration.
+The CloudFormation template handles creating the peering request, accepting the peering request, and creating the +necessary routes between the HVN and VPC. + +The manual process requires you to perform each step in your HCP and AWS accounts. + + + + +@include '/hcp-network/aws-quick-peering.mdx' + + + + +@include '/hcp-network/aws-manual-peering-cli.mdx' + + + + +@include '/hcp-network/aws-manual-peering-ui.mdx' + + + \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-aws/routes.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-aws/routes.mdx new file mode 100644 index 0000000000..8160d46228 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/hvn-aws/routes.mdx @@ -0,0 +1,57 @@ +--- +page_title: HVN routes +description: |- + This topic describes how to create routes in HCP. Routes are rules in the HashiCorp Virtual Network (HVN) route table that direct network traffic between the HVN and a target connection. +--- + +# Routes + +Routes are rules in the HashiCorp Virtual Network (HVN) route table that direct network traffic between the HVN and a target connection. + +## Introduction + +Routes are a necessary part of the HVN configuration. They provide a networking abstraction that enables network traffic between the HVN and a target HVN connection, such as a peering connection or transit gateway attachment. + +Routes enable communication between the destination and all clusters in the HVN, including clusters created after the initial deployment. When you create a route, it is added to the route table of the HVN. HCP uses the route table to communicate with your cloud provider’s resources. + +Routes have two components for network traffic: +* The _destination_ is specified by the CIDR block of the resource you want to reach through your target. +* A _target_ is the HVN connection where traffic is routed, such as a peering connection.
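+
+The two components map onto the HCP Terraform provider's `hcp_hvn_route` resource as `destination_cidr` and `target_link`. A sketch with placeholder names and an illustrative CIDR (the HVN and peering resources are assumed to exist elsewhere in the configuration):
+
+```hcl
+# Route traffic for the peered VPC's CIDR block through the peering connection.
+resource "hcp_hvn_route" "to_vpc" {
+  hvn_link         = hcp_hvn.main.self_link
+  hvn_route_id     = "vpc-route"
+  destination_cidr = "10.0.0.0/16"                           # the destination
+  target_link      = hcp_aws_network_peering.peer.self_link  # the target
+}
+```
+
+Refer to the HCP provider documentation linked elsewhere in these pages for the authoritative resource schema.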
+ +The ports available for use in route configuration depend on the type of cluster you connect. + +## Create a route + +@include '/hcp-network/create-aws-routes.mdx' + +To add more than one route to the table, repeat these steps as necessary. + +## Configure security groups + +After you configure a target connection and specify the routes for the HVN to connect to your VPC, you may need to configure security groups to open the virtual firewall between your HVN and cloud network. + +For information specific to HCP, refer to [Security Groups](/hcp/docs/hcp/network/hvn-aws/security-groups). + +### Route table reference + +Route tables in HCP include the following fields: + +* **ID**: The name the route was given. +* **Destination**: The destination CIDR block range configured in the route. +* **Target**: The name of the target. + * The value is the ID of the peering connection. + * When you click on the target, it opens the target’s configuration screen. +* **Status**: Shows if the route is active, pending, or failed. +* **Target type**: Indicates that the route connects either a peering connection or a transit gateway attachment. + +To delete a route entry, choose **Delete** from the ellipsis menu. When prompted, confirm that you want to remove the route. + +### CIDR block reference + +The following rules apply to CIDR blocks specified in the route configuration: + +- CIDR blocks must follow either the [RFC1918 specification](https://datatracker.ietf.org/doc/html/rfc1918) or the [RFC6598 specification](https://datatracker.ietf.org/doc/html/rfc6598). +- HCP does not accept publicly routable addresses because they could overlap with addresses of services used for HCP management and operations. +- CIDR blocks configured in the route cannot overlap with the parent HVN. +- Different routes in the HVN can specify the same CIDR blocks, but the route with the narrowest CIDR definition takes priority when routing network traffic.
+- Routes cannot have a narrower CIDR definition than an existing route that targets a peering connection. \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-aws/security-groups.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-aws/security-groups.mdx new file mode 100644 index 0000000000..b523c54bef --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/hvn-aws/security-groups.mdx @@ -0,0 +1,93 @@ +--- +page_title: Security groups +description: |- + This topic describes the security group settings required to open the virtual firewall between your HVN and cloud network. +--- + +# Security groups + +You can configure security group settings to open the virtual firewall between your HVN and your AWS cloud network. + +## Overview + +A security group is an entity in AWS that functions as a virtual firewall between your AWS instances. Security groups manage protocol and port permissions for AWS traffic to control inbound and outbound traffic. For additional information, refer to the AWS documentation [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html). + +To establish communication between your HashiCorp Virtual Network (HVN) and your Amazon VPC or Amazon transit gateway, you must: +* Create a security group. +* Configure _ingress_ (inbound) rules. +* Configure _egress_ (outbound) rules. + +To configure security group rules, you can use either the AWS console or the AWS Command Line Interface. + +-> **Tip**: Creating custom security group configurations for your HCP products improves infrastructure security. However, administrative flexibility may decrease over time as you introduce multiple service deployments. + +## Security group rules for HCP Consul Dedicated + +To allow traffic between your Consul cluster and AWS, specify ingress (inbound) and egress (outbound) rules on your Amazon VPC or Amazon transit gateway.
+ +### Ingress + +To allow inbound traffic from your HVN, specify the following rules on your Amazon VPC or Amazon transit gateway: + +| Protocol | From Port | To Port | Source | Description | +| -------- | :-------: | :-----: | :----------------------: | :-----------------------------------------: | +| TCP | 8301 | 8301 | HVN-CIDR | Used to handle gossip from server | +| UDP | 8301 | 8301 | HVN-CIDR | Used to handle gossip from server | +| TCP | 8301 | 8301 | Security group ID itself | Used to handle gossip between client agents | +| UDP | 8301 | 8301 | Security group ID itself | Used to handle gossip between client agents | + +To apply the ingress rules to your security group, you can issue the `authorize-security-group-ingress` command. Replace the placeholders with the following information: + +* `REGION`: the target VPC region +* `SECURITY_GROUP_ID`: the security group ID +* `HVN_CIDR`: the CIDR block configured for your HVN + +```shell-session +$ aws ec2 --region REGION \ + authorize-security-group-ingress --group-id SECURITY_GROUP_ID --ip-permissions \ + IpProtocol=tcp,FromPort=8301,ToPort=8301,IpRanges='[{CidrIp=HVN_CIDR}]' \ + IpProtocol=udp,FromPort=8301,ToPort=8301,IpRanges='[{CidrIp=HVN_CIDR}]' \ + IpProtocol=tcp,FromPort=8301,ToPort=8301,UserIdGroupPairs='[{GroupId=SECURITY_GROUP_ID}]' \ + IpProtocol=udp,FromPort=8301,ToPort=8301,UserIdGroupPairs='[{GroupId=SECURITY_GROUP_ID}]' +``` + +### Egress + +To allow outbound traffic from your VPC, specify the following rules on your Amazon VPC or Amazon transit gateway: + +| Protocol | From Port | To Port | Destination | Description | +| -------- | :-------: | :-----: | :----------------------: | :-----------------------------------------------: | +| TCP | 80 | 80 | HVN-CIDR | Consul API | +| TCP | 443 | 443 | HVN-CIDR | Consul API | +| TCP | 8300 | 8300 | HVN-CIDR | For RPC communication between clients and servers | +| TCP | 8301 | 8301 | HVN-CIDR | Used to gossip with server | +| UDP | 8301 | 8301 | HVN-CIDR | Used to gossip with server | +| TCP | 8301 | 8301 | Security group ID itself | Used to handle gossip between client agents | +| UDP | 8301 |
8301 | Security group ID itself | Used to handle gossip between client agents | +| TCP | 8502 | 8502 | HVN-CIDR | For gRPC communication to servers | + +To apply the egress rules to the security group, you can issue the `authorize-security-group-egress` command. Replace the placeholders with the following information: + +* `REGION`: the target VPC region +* `SECURITY_GROUP_ID`: the security group ID +* `HVN_CIDR`: the CIDR block configured for your HVN + +```shell-session +$ aws ec2 --region REGION \ + authorize-security-group-egress --group-id SECURITY_GROUP_ID --ip-permissions \ + IpProtocol=tcp,FromPort=80,ToPort=80,IpRanges='[{CidrIp=HVN_CIDR}]' \ + IpProtocol=tcp,FromPort=443,ToPort=443,IpRanges='[{CidrIp=HVN_CIDR}]' \ + IpProtocol=tcp,FromPort=8300,ToPort=8300,IpRanges='[{CidrIp=HVN_CIDR}]' \ + IpProtocol=tcp,FromPort=8301,ToPort=8301,IpRanges='[{CidrIp=HVN_CIDR}]' \ + IpProtocol=udp,FromPort=8301,ToPort=8301,IpRanges='[{CidrIp=HVN_CIDR}]' \ + IpProtocol=tcp,FromPort=8301,ToPort=8301,UserIdGroupPairs='[{GroupId=SECURITY_GROUP_ID}]' \ + IpProtocol=udp,FromPort=8301,ToPort=8301,UserIdGroupPairs='[{GroupId=SECURITY_GROUP_ID}]' \ + IpProtocol=tcp,FromPort=8502,ToPort=8502,IpRanges='[{CidrIp=HVN_CIDR}]' +``` + +## Security group rules for HCP Vault Dedicated + +To allow traffic between your Vault cluster and AWS, specify egress (outbound) rules on your Amazon VPC or Amazon transit gateway. Ingress rules are not required to allow traffic from Vault clusters into your VPC or transit gateway. + +### Egress + +@include '/hcp-network/configure-aws-security-group.mdx' \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-aws/tgw-attach.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-aws/tgw-attach.mdx new file mode 100644 index 0000000000..68ea7db0bb --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/hvn-aws/tgw-attach.mdx @@ -0,0 +1,68 @@ +--- +page_title: Transit Gateway Attachments +description: |- + This topic describes how to create transit gateway (TGW) attachments, which connect an HVN to an AWS transit gateway.
+--- + +# Transit gateway attachments + +You can create transit gateway attachments to connect a HashiCorp Virtual Network (HVN) to an AWS transit gateway. +* A _transit gateway_ is an AWS component that acts as a network transit hub in your AWS environment. +* A _transit gateway attachment_ is a component in HCP that connects your HVN to a transit gateway in AWS. + +## Overview + +The following procedure describes how to connect clusters in HCP to resources deployed to AWS: +1. Connect one or more VPCs in your AWS network to your transit gateway. +1. Create a _resource share_ using the AWS Resource Access Manager. The transit gateway and resource share must exist in the same region as the HVN you want to connect to. +1. Create a transit gateway attachment in HCP. The platform identifies the shared resource using the Amazon Resource Name (ARN) and the transit gateway ID. +1. HCP initiates a request to AWS for access to the resources. You must approve the attachment request in the AWS account before HCP can route traffic through the transit gateway. +1. Configure routes to direct traffic between the transit gateway attachment and the transit gateway. + +You can create a transit gateway attachment in HCP or you can use the HCP Terraform provider. For instructions on how to create transit gateway attachments with Terraform, refer to the [HCP provider documentation](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/aws_transit_gateway_attachment). + +## Requirements + +Before you create a transit gateway attachment, make sure you have the following: + +* AWS account ID +* AWS transit gateway ID +* ARN of the resource share in AWS +* No active PrivateLink service associated with the HVN + +The HCP interface provides links and other onscreen assistance to help you find this information. For additional details on where to find this information, refer to the AWS documentation.
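+
+The Terraform path mentioned above maps these requirements onto the `hcp_aws_transit_gateway_attachment` resource. A sketch with placeholder IDs and ARN (none of these values are real):
+
+```hcl
+# Placeholder values; substitute your own transit gateway ID and
+# resource share ARN from the AWS account.
+resource "hcp_aws_transit_gateway_attachment" "example" {
+  hvn_id                        = hcp_hvn.main.hvn_id
+  transit_gateway_attachment_id = "example-tgw-attachment"
+  transit_gateway_id            = "tgw-0123456789abcdef0"
+  resource_share_arn            = "arn:aws:ram:us-east-1:111111111111:resource-share/EXAMPLE"
+}
+```
+
+As with the portal workflow, the attachment request still has to be accepted in the AWS account before HCP can route traffic through the transit gateway.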
+ +## Create a transit gateway attachment + +The HCP interface provides guided steps to help you create transit gateway attachments. You can follow the command line or the web UI workflow. + +@include '/hcp-network/create-aws-tgw-hcp.mdx' + + + + +@include '/hcp-network/create-aws-tgw-terminal.mdx' + + + + +@include '/hcp-network/create-aws-tgw-web.mdx' + + + + +## AWS Cloud WAN considerations + +Cloud WAN attachments are not currently supported directly with HVNs, but Amazon Web Services supports peering a transit gateway attachment to a Cloud WAN segment, which allows connectivity to the HVN. For more information, refer to the AWS documentation for +[Cloud WAN Peerings](https://docs.aws.amazon.com/network-manager/latest/cloudwan/cloudwan-peerings.html) and +[TGW Peerings](https://docs.aws.amazon.com/vpc/latest/tgw/tgw-peering.html). + +## Next steps + +After you create the attachment, you must create a route to direct traffic to +your VPCs. For more information, refer to [Routes](/hcp/docs/hcp/network/hvn-aws/routes). + +## Tutorial + +- [Connect an Amazon Transit Gateway to your HashiCorp Virtual Network](/hcp/tutorials/networking/amazon-transit-gateway). diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-azure/hub-spoke-options.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-azure/hub-spoke-options.mdx new file mode 100644 index 0000000000..53031afc23 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/hvn-azure/hub-spoke-options.mdx @@ -0,0 +1,191 @@ +--- +page_title: Hub and spoke options +description: |- + Learn about the supported Azure hub and spoke options with HashiCorp Virtual Network (HVN) peering.
+--- + +# Hub and spoke options + + +Hub and Spoke Support for Azure HVNs is currently in Public Beta + + +This documentation focuses on Azure peering and the additional configurations +for advanced network topologies, commonly referred to as +[hub-and-spoke](https://learn.microsoft.com/en-us/azure/architecture/networking/architecture/hub-spoke). + +For hub-and-spoke networking in Azure, additional settings must be enabled on the HVN Peering Connection +and specific routing configurations must be added to the HVN and any other route tables +used by applications communicating with HCP. + +## Supported network topologies + + + + +A network virtual appliance (NVA) topology is composed of a central hub network +with an NVA such as Azure Firewall or a third-party router and multiple spoke +networks connected via Virtual Network Peering or on-premises networks connected +via ExpressRoute or VPN. + +This topology is one of the [architectures recommended by +Microsoft](https://learn.microsoft.com/en-us/azure/architecture/networking/guide/spoke-to-spoke-networking#pattern-2-spokes-communicating-over-a-network-appliance) +for centrally managing transitive routing between spoke virtual networks and on-premises networks. + +![Diagram showing a HVN peered with Azure hub network](/img/hvn/diagram-hvn-nva-light.png) + +### Peering Connection Configuration + +Enable hub-and-spoke configuration: +- Set "Traffic forwarded from remote virtual network" to Allow +- Set "Allow remote gateways" to Disallow + +![Screenshot of HCP Peering Connection UI showing "Traffic forwarded from remote virtual network" set to Allow +and "Allow remote gateways" set to Disallow.](/img/hvn/ui-hvn-nva-peering-light.png) + +Refer to [Peering connections](/hcp/docs/hcp/network/hvn-azure/hvn-peering) for +more details about configuration of HVN Peering Connections. + +### Routing Configuration + +Define one or more HVN Routes for all CIDR Ranges that HCP Vault Dedicated needs +to reach.
+ +Configure the HVN Peering connection for the hub-and-spoke NVA topology: + +- Set "Next Hop Type" to Virtual Appliance +- Set "Next Hop IP" to your NVA IP Address. + +![Screenshot of HCP Route UI showing Next Hop Type set to Virtual Appliance and Next Hop IP.](/img/hvn/ui-hvn-nva-route-light.png) + +The route table(s) used by the NVA for routing rules will need to be configured +with a route to the HVN. Each subnet within a given Spoke Virtual Network and +any on-premises network routers will also need to have a route defined to reach +the HVN. + +Refer to [Routes](/hcp/docs/hcp/network/hvn-azure/routes) for more details about +configuration of HVN Routes and for the specific route patterns required. + + + + + + +Azure VPN Gateways support static or dynamic routes. HVNs only support static +route configurations by default. If dynamic route propagation via BGP is required +without a Network Virtual Appliance, reach out to [HashiCorp +Support](https://support.hashicorp.com) prior to creating your HVN and HCP Vault +Dedicated cluster. + + + +A VPN gateway topology is composed of a central hub network and +multiple spoke networks connected via Virtual Network Peering and a VPN Virtual Network Gateway. + +This topology can be used to transitively route traffic between spoke virtual +networks, but [is not recommended by +Microsoft](https://learn.microsoft.com/en-us/azure/architecture/networking/guide/spoke-to-spoke-networking#pattern-2-spokes-communicating-over-a-network-appliance).
+![Diagram showing an HVN peered through a hub virtual network with a VPN Virtual Network Gateway as the transitive routing appliance](/img/hvn/diagram-hvn-vpn-light.png) + +### Peering Connection Configuration + +Enable hub-and-spoke configuration: + +- Set "Traffic forwarded from remote virtual network" to Allow +- Set "Allow remote gateways" to Allow + +![Screenshot of HCP Peering Connection UI showing "Traffic forwarded from remote +virtual network" set to Allow and "Allow remote gateways" set to +Allow.](/img/hvn/ui-hvn-gw-peering-light.png) + +Refer to [Peering connections](/hcp/docs/hcp/network/hvn-azure/hvn-peering) for +more details about configuration of HVN Peering Connections. + +### Routing Configuration + + + +When using Dynamic Routes, Static HVN Routes are still needed to configure +the HVN Network Security Groups, but the routes themselves will be ignored by +the gateway. + + + +Define one or more HVN Routes for all CIDR Ranges that HCP Vault Dedicated needs +to reach. + +Configure the HVN Peering connection for the hub-and-spoke VPN gateway topology: + +- Set "Next Hop Type" to Virtual Network Gateway + +![Screenshot of HCP Route UI showing Next Hop Type set to Virtual Network Gateway.](/img/hvn/ui-hvn-gw-route-light.png) + +Each subnet within a given Spoke Virtual Network and any on-premises network +routers will need to have a route defined to reach the HVN. + +Refer to [Routes](/hcp/docs/hcp/network/hvn-azure/routes) for more details about +configuration of HVN Routes and for the specific route patterns required. + + + + + + +Azure ExpressRoute Gateways only support dynamic route propagation via BGP. HVNs +only support static route configurations by default. If using an ExpressRoute +Gateway without a Network Virtual Appliance, reach out to [HashiCorp +Support](https://support.hashicorp.com) prior to creating your HVN and HCP Vault +Dedicated cluster.
+ + + +An ExpressRoute gateway topology is composed of a central hub network and +multiple spoke networks connected via Virtual Network Peering and an ExpressRoute Virtual Network Gateway. + +This topology can be used to transitively route traffic between spoke virtual +networks, but [is strongly discouraged by +Microsoft](https://learn.microsoft.com/en-us/azure/architecture/networking/guide/spoke-to-spoke-networking#pattern-2-spokes-communicating-over-a-network-appliance). +![Diagram showing an HVN peered through a hub virtual network with an ExpressRoute Virtual Network Gateway as the transitive routing appliance](/img/hvn/diagram-hvn-er-light.png) + +### Peering Connection Configuration + +Enable hub-and-spoke configuration: + +- Set "Traffic forwarded from remote virtual network" to Allow +- Set "Allow remote gateways" to Allow +![Screenshot of HCP Peering Connection UI showing "Traffic forwarded from remote virtual network" set to Allow and "Allow remote gateways" set to Allow.](/img/hvn/ui-hvn-gw-peering-light.png) + +Refer to [Peering connections](/hcp/docs/hcp/network/hvn-azure/hvn-peering) for +more details about configuration of HVN Peering Connections. + +### Routing Configuration + + + +When using Dynamic Routes, Static HVN Routes are still needed to configure the +HVN Network Security Groups, but the routes themselves will be ignored by the +gateway. + + + +Define one or more HVN Routes for all CIDR Ranges that HCP Vault Dedicated needs to reach. + +Configure the HVN Peering connection for the hub-and-spoke ExpressRoute Gateway topology: + +- Set "Next Hop Type" to Virtual Network Gateway +![Screenshot of HCP Route UI showing Next Hop Type set to Virtual Network Gateway.](/img/hvn/ui-hvn-gw-route-light.png) + +Refer to [Routes](/hcp/docs/hcp/network/hvn-azure/routes) for more details about +configuration of HVN Routes and for the specific route patterns required. + + + + + +HVN connectivity via Azure Virtual WAN Connections is not currently supported. 
+ + + + + +## Additional Resources +- [NVA Topology Setup Example with Terraform Provider for HCP and AzureRM](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/guides/peering-azure#peer-an-azure-vnet-to-an-hvn---network-virtual-appliance-nva-support) diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-azure/hvn-azure.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-azure/hvn-azure.mdx new file mode 100644 index 0000000000..05fb2d068f --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/hvn-azure/hvn-azure.mdx @@ -0,0 +1,52 @@ +--- +page_title: Create and Manage an HVN +description: |- + This topic describes how to create and manage a HashiCorp Virtual Network (HVN) for Microsoft Azure. +--- + +# Create and manage an HVN + +You can create and manage a HashiCorp Virtual Network (HVN) for Microsoft Azure. Use an HVN to delegate an IPv4 CIDR range to HCP. The platform uses this CIDR range to automatically create a virtual network (VNet). + +## Specification + +- You can create one HVN for each available cloud region. +- Resources added to an HVN appear in the HVN's cloud region. Deploying a cluster into an HVN created in the westus2 region, for example, adds the cluster to the westus2 region. +- All HCP resources must be located in one HVN. A single product deployment cannot span two different HVNs. +- You cannot move product deployments from one HVN to another. +- You cannot change HVNs after you deploy them. + + +## Create an HVN + +@include '/hcp-network/create-hvn-azure.mdx' + +## Connect an HVN to Microsoft Azure + +To connect your HashiCorp Virtual Network to your Azure infrastructure, you must first create a peering connection. Then, specify traffic routes so that clusters can communicate with client resources. 
Individual configuration instructions are available: + +* [Peering Connections](/hcp/docs/hcp/network/hvn-azure/hvn-peering) +* [Routes](/hcp/docs/hcp/network/hvn-azure/routes) +* [Network Security Groups](/hcp/docs/hcp/network/hvn-azure/security-groups) + +## Manage an HVN + +You cannot modify HVNs after you deploy them, but some management features are available. + +### Import to Terraform + +To import and manage your HVN in Terraform, complete the following steps: + +1. Sign in to [the HCP Portal](https://portal.cloud.hashicorp.com/) and select your organization. +1. From the sidebar, click **HashiCorp Virtual Network**. +1. Click on an HVN in the **ID** column. +1. From the **Manage** menu, copy the provided `terraform import` command. +1. Open your terminal and run the command. + +### Delete an HVN + +1. Sign in to [the HCP Portal](https://portal.cloud.hashicorp.com/) and select your organization. +1. From the sidebar, click **HashiCorp Virtual Network**. +1. Click on an HVN in the **ID** column. +1. From the **Manage** menu, click **Delete**. +1. When prompted, select **Confirm**. \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-azure/hvn-peering.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-azure/hvn-peering.mdx new file mode 100644 index 0000000000..787224df94 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/hvn-azure/hvn-peering.mdx @@ -0,0 +1,74 @@ +--- +page_title: Peering connections +description: |- + This topic describes how to create a peering connection between HCP and your virtual network (VNet) in Microsoft Azure. +--- + +# Peering connections + +You can create a peering connection between HashiCorp Cloud Platform (HCP) and +your virtual network (VNet) in Azure to allow traffic between services. + +## Overview + +HCP Consul Dedicated and HCP Vault Dedicated use peering connections to communicate with the clients +hosted in your Azure environment.
+ +You can create peering connections from [the HCP Portal](https://portal.cloud.hashicorp.com/) or the +HCP provider in Terraform. For instructions on how to create peering connections +with Terraform, refer to the [HCP provider +documentation](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/guides/peering-azure). + +## Requirements + +- An Azure account ID +- The ID of the VNet you wish to connect +- VNets must be configured with [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918) or [RFC6598 specification](https://datatracker.ietf.org/doc/html/rfc6598) IP addresses. + +## Create peering connections + +To set up a peering connection, you need to configure the connection request in +HCP and then configure a corresponding request in Azure. + + + +When peering an HVN with an Azure hub and spoke topology network, there are +additional considerations to ensure traffic is properly routed from spoke +networks. + +Refer to the [Hub and spoke +options](/hcp/docs/hcp/network/hvn-azure/hub-spoke-options) documentation for +more information. + + + +### Configure the connection request in HCP + +@include '/hcp-network/configure-azure-connection.mdx' + +The HVN sends a peering connection request to Azure. The peering request expires +after seven days. The status of the connection appears as pending until either +the connection process is completed or the request expires. + +### Accept the connection request in Azure + +HCP generates terminal commands that you can copy and paste into your Azure CLI +to configure the corresponding connection request. HCP also provides links to +the Azure documentation if you prefer to use the Azure browser interface. + +@include '/hcp-network/complete-azure-connection.mdx' + +You can also create the second request from the Azure console. For information +about creating VNet peering connections, refer to the [Azure +documentation](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview).
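+
+For the Terraform path mentioned in the overview, a hedged sketch of the HCP provider's `hcp_azure_peering_connection` resource (attribute names follow the provider's Azure peering guide; every ID below is a placeholder):
+
+```hcl
+# Placeholder subscription, tenant, resource group, and VNet values.
+resource "hcp_azure_peering_connection" "peer" {
+  hvn_link                 = hcp_hvn.main.self_link
+  peering_id               = "azure-peering"
+  peer_subscription_id     = "00000000-0000-0000-0000-000000000000"
+  peer_tenant_id           = "00000000-0000-0000-0000-000000000000"
+  peer_resource_group_name = "example-rg"
+  peer_vnet_name           = "example-vnet"
+  peer_vnet_region         = "westus2"
+}
+```
+
+The provider still requires the Azure-side role assignments described in its peering guide before the connection becomes active.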
+ +## Next steps + +The HVN peering connection does not contain routing information. Once the +connection is active, you can add a route for all or part of the VNet CIDR +range. For more details, refer to +[Routes](/hcp/docs/hcp/network/hvn-azure/routes). + +## Tutorial + +- [Peering an Azure Virtual Network with HashiCorp Cloud Platform](/hcp/tutorials/networking/azure-peering-hcp). diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-azure/routes.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-azure/routes.mdx new file mode 100644 index 0000000000..bf343fa6dc --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/hvn-azure/routes.mdx @@ -0,0 +1,188 @@ +--- +page_title: HVN Routes +description: |- + This topic describes how to create routes in HCP. Routes are rules in the HashiCorp Virtual Network (HVN) route table that direct network traffic between the HVN and a target connection. +--- + +# Routes + +Routes are rules in the HashiCorp Virtual Network (HVN) route table that direct +network traffic between the HVN and a target connection. + +## Overview + +Routes are a necessary part of the HVN configuration. They provide a networking +abstraction that enables network traffic between the HVN and a target HVN +connection, such as a peering connection. + +Routes enable communication between the destination and all clusters in the HVN, +including clusters created after the initial deployment. When you create a +route, it is added to the route table of the HVN. HCP uses the route table to +communicate with your cloud provider’s resources. + +Routes have two components for network traffic: + +- The _destination_ is specified by the CIDR block of the resource you want to reach through your target. +- A _target_ (also known as Next Hop) is where traffic for the _destination_ should be routed, like a Virtual Network Peering connection, Virtual Appliance, or Virtual Network Gateway. 
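+
+Both components, together with the Azure next-hop settings covered in the route table planning tables, surface on the HCP provider's `hcp_hvn_route` resource. An illustrative sketch for an NVA next hop (all names, CIDRs, and addresses are placeholders):
+
+```hcl
+# Send traffic for a spoke CIDR range through a network virtual appliance.
+# The HVN and peering resources are assumed to exist elsewhere in the
+# configuration.
+resource "hcp_hvn_route" "to_spoke" {
+  hvn_link         = hcp_hvn.main.self_link
+  hvn_route_id     = "spoke-route"
+  destination_cidr = "10.1.0.0/16"
+  target_link      = hcp_azure_peering_connection.peer.self_link
+
+  azure_config {
+    next_hop_type       = "VIRTUAL_APPLIANCE"
+    next_hop_ip_address = "10.0.1.4"
+  }
+}
+```
+
+Refer to the HCP provider documentation for the authoritative list of `next_hop_type` values and when `next_hop_ip_address` is required.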
+ +## Route table planning + +Routing between an HVN and an Azure network varies based on the Azure network +topology. In the most basic configuration, a route is needed from the HVN to the +Azure Virtual Network (VNet), and a route from the VNet to the HVN. + + + + +#### HVN Routes +| Destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [Peered VNet CIDR] | Virtual Network Peering | n/a | + +#### Peered VNet Routes +_This route is usually created automatically when the peering is established._ + +| Destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [HVN CIDR] | Virtual Network Peering | n/a | + + + + +#### HVN Routes + +The HVN should be configured with an individual route for each summarizable RFC-1918 or RFC-6598 (CG-NAT) CIDR Range that it may need to communicate with. Public IPs are not supported. + + +| HCP destination | Next hop type | Next hop IP | +| ------------------------- | --------------------------- | ----------------------------- | +| [Spoke VNet or On-Prem CIDR] | Virtual Appliance | [NVA IP Address] | + +#### Hub VNet or Router Configuration Routes + +| Gateway subnet destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [HVN CIDR] | Virtual Appliance | [NVA IP Address] | + +| Firewall subnet destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [HVN CIDR] | Virtual Network Peering | n/a | + +#### Spoke VNet or On-premises Network Routes + +| Spoke subnet destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| 0.0.0.0/0 | Virtual Appliance | [NVA IP Address] | + +| On-premises destination | Next hop type |
Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [HVN CIDR] | Virtual Appliance | [NVA IP Address] | + + + + + + +Azure VPN Gateways support static or dynamic routes. HVNs only support static +route configurations by default. If dynamic route propagation via BGP is required +without a Network Virtual Appliance, reach out to [HashiCorp +Support](https://support.hashicorp.com) prior to creating your HVN and HCP Vault +Dedicated cluster. + + + +#### HVN Routes + +The HVN should be configured with an individual route for each summarizable RFC-1918 or RFC-6598 (CG-NAT) CIDR Range that it may need to communicate with. Public IPs are not supported. + + +| HCP destination | Next hop type | Next hop IP | +| ------------------------- | --------------------------- | ----------------------------- | +| [Spoke VNet or On-Prem CIDR] | Virtual Network Gateway | n/a | + +#### Hub VNet Routes + +| Gateway subnet destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [HVN CIDR] | Virtual Network Peering | n/a | + +#### Spoke VNet or On-premises Network Routes + +| Spoke subnet destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [HVN CIDR] | Virtual Network Gateway | n/a | + +| On-premises destination | Next hop type | Next hop IP | +| ----------------------------- | --------------------------- | ----------------------------- | +| [HVN CIDR] | Virtual Network Gateway | n/a | + + + + + + +Azure ExpressRoute Gateways only support dynamic route propagation via BGP. HVNs +only support static route configurations by default. If using an ExpressRoute +Gateway without a Network Virtual Appliance, reach out to [HashiCorp +Support](https://support.hashicorp.com) prior to creating your HVN and HCP Vault +Dedicated cluster.
+
+
+
+#### HVN Routes
+
+The HVN should be configured with an individual route for each summarizable RFC-1918 or RFC-6598 (CG-NAT) CIDR range that it may need to communicate with. Public IPs are not supported.
+
+
+| HCP destination | Next hop type | Next hop IP |
+| ------------------------- | --------------------------- | ----------------------------- |
+| [Spoke VNet or On-Prem CIDR] | Virtual Network Gateway | n/a |
+
+#### Spoke VNet or On-premises Network Routes
+
+| On-premises destination | Next hop type | Next hop IP |
+| ----------------------------- | --------------------------- | ----------------------------- |
+| [HVN CIDR] | Virtual Network Gateway | n/a |
+
+
+
+
+
+## Create a route
+
+@include '/hcp-network/create-azure-routes.mdx'
+
+To add more than one route to the table, repeat these steps as necessary.
+
+## Configure network security groups
+
+After you configure a target connection and specify the routes for the HVN to
+connect to your VNet, you may need to configure security groups to open the
+virtual firewall between your HVN and cloud network.
+
+Refer to [Network Security
+Groups](/hcp/docs/hcp/network/hvn-azure/security-groups) for information
+specific to HCP.
+
+### Route table reference
+
+Route tables in HCP include the following fields:
+
+- **ID**: The name given to the route.
+- **Destination**: The destination CIDR block range configured in the route.
+- **Target**:
+  - The ID of the peering connection.
+  - Click the target to open the target’s configuration screen.
+- **Status**: Shows whether the route is active, pending, or failed.
+- **Target type**: Indicates that the route targets a peering connection.
+
+To delete a route entry, choose **Delete** from the ellipsis menu. When prompted, confirm that you want to remove the route.
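Before you create routes, it can help to sanity-check each planned destination against the requirements described in the planning tables above: destinations must be RFC-1918 or RFC-6598 (CG-NAT) ranges (public IPs are not supported) and must not overlap the HVN's own CIDR block. The following is a minimal sketch using Python's standard `ipaddress` module; the CIDR values are hypothetical examples, not values required by HCP:

```python
import ipaddress

# Private ranges that HVN routes may target (RFC 1918 plus RFC 6598).
ALLOWED = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("100.64.0.0/10"),  # RFC 6598 (CG-NAT)
]

def check_route(destination: str, hvn_cidr: str) -> list[str]:
    """Return a list of problems with a planned HVN route destination."""
    dest = ipaddress.ip_network(destination)
    hvn = ipaddress.ip_network(hvn_cidr)
    problems = []
    if not any(dest.subnet_of(allowed) for allowed in ALLOWED):
        problems.append("not an RFC 1918 or RFC 6598 range")
    if dest.overlaps(hvn):
        problems.append("overlaps the HVN CIDR")
    return problems

# Hypothetical values: an HVN on 172.25.16.0/20 and two candidate destinations.
print(check_route("10.1.0.0/16", "172.25.16.0/20"))   # -> []
print(check_route("52.10.0.0/16", "172.25.16.0/20"))  # -> ['not an RFC 1918 or RFC 6598 range']
```

A pre-check like this is only a convenience; HCP performs its own validation when you create the route.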
+
+### CIDR block reference
+
+The following rules apply to CIDR blocks specified in the route configuration:
+
+- CIDR blocks must follow either the [RFC1918 specification](https://datatracker.ietf.org/doc/html/rfc1918) or the [RFC6598 specification](https://datatracker.ietf.org/doc/html/rfc6598).
+- HCP does not accept publicly routable addresses because they could overlap with addresses of services used for HCP management and operations.
+- CIDR blocks configured in the route cannot overlap with the parent HVN.
+- Different routes in the HVN can specify the same CIDR blocks, but the route with the narrowest CIDR definition takes priority when routing network traffic.
+- Routes cannot have a narrower CIDR definition than an existing route that targets a peering connection.
diff --git a/content/hcp-docs/content/docs/hcp/network/hvn-azure/security-groups.mdx b/content/hcp-docs/content/docs/hcp/network/hvn-azure/security-groups.mdx
new file mode 100644
index 0000000000..5ecf3c3249
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/network/hvn-azure/security-groups.mdx
@@ -0,0 +1,61 @@
+---
+page_title: Security Groups
+description: |-
+  This topic describes the security group settings required to open the virtual firewall between your HVN and cloud network.
+---
+
+# Network security groups
+
+You can configure network security group settings to open the virtual firewall between your HVN and your Azure cloud network.
+
+## Overview
+
+A _network security group_ is an entity in Azure that functions as a virtual firewall between your Azure instances. Security groups manage protocol and port permissions for Azure traffic to control inbound and outbound traffic. For additional information, refer to the Azure documentation on [How network security groups filter network traffic](https://docs.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works).
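As a mental model for how a network security group filters traffic, rules are evaluated in priority order (lower numbers first), and the first rule that matches decides whether the traffic is allowed or denied; traffic that matches no custom rule falls through to Azure's built-in default rules. The following toy sketch matches on destination port only and is an illustration of that evaluation order, not Azure's actual implementation; the rule values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int  # lower number = evaluated first
    name: str
    port: int
    action: str    # "Allow" or "Deny"

def evaluate(rules: list[Rule], port: int) -> str:
    """Return the action of the first rule, in priority order, that matches the port."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.port == port:
            return f"{rule.action} ({rule.name})"
    # Nothing matched: Azure's built-in default rules deny unmatched inbound traffic.
    return "Deny (DenyAllInbound)"

rules = [
    Rule(400, "ConsulServerInbound", 8301, "Allow"),
]
print(evaluate(rules, 8301))  # -> Allow (ConsulServerInbound)
print(evaluate(rules, 22))    # -> Deny (DenyAllInbound)
```

Real NSG rules also match on protocol, source, and destination, but the priority-ordered, first-match behavior is the same.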
+
+To establish communication between your HashiCorp Virtual Network (HVN) and your Azure VNet, you must:
+* Create a security group.
+* Configure _ingress_ (inbound) rules.
+* Configure _egress_ (outbound) rules.
+
+To configure security group rules, you can use either the Azure portal or the Azure Command Line Interface.
+
+-> **Tip**: Creating custom security group configurations for your HCP products improves infrastructure security. However, administrative flexibility may decrease over time as you introduce multiple service deployments.
+
+## Update network security groups
+
+@include '/hcp-network/configure-azure-security-group.mdx'
+
+## Network security group rules for HCP Consul Dedicated reference
+
+### Inbound rules
+
+To allow inbound traffic from your HVN, specify the following rules on your Azure VNet:
+
+| Priority | Name | Port | Protocol | Source | Destination | Action |
+| -------- | :-----------------: | :--: | :-------: | :------------: | :------------: | :----: |
+| 400 | ConsulServerInbound | 8301 | Any | HVN-CIDR | VirtualNetwork | Allow |
+| 401 | ConsulClientInbound | 8301 | Any | VirtualNetwork | VirtualNetwork | Allow |
+
+### Outbound rules
+
+| Priority | Name | Port | Protocol | Source | Destination | Action |
+| -------- | :------------------: | :-------: | :-------: | :------------: | :------------: | :----: |
+| 400 | HTTPOutbound | 80 | Any | VirtualNetwork | HVN-CIDR | Allow |
+| 401 | HTTPSOutbound | 443 | Any | VirtualNetwork | HVN-CIDR | Allow |
+| 402 | ConsulServerOutbound | 8300-8301 | Any | VirtualNetwork | HVN-CIDR | Allow |
+| 403 | ConsulClientOutbound | 8301 | Any | VirtualNetwork | VirtualNetwork | Allow |
+| 404 | GRPCOutbound | 8502 | Any | VirtualNetwork | HVN-CIDR | Allow |
+
+## Network security group rules for HCP Vault Dedicated reference
+
+To allow traffic between your Vault cluster and Azure, specify egress (outbound) rules on your Azure VNet. 
+Ingress rules are not required to allow traffic from Vault clusters. + +### Egress + +To allow outbound traffic from your VNet, add the following rules to your +security group for HCP Vault Dedicated: + +| Priority | Name | Port | Protocol | Source | Destination | Action | +| -------- | :------------------: | :-------: | :-------: | :------------: | :------------: | :----: | +| 400 | VaultClientOutbound | 8200 | TCP | VirtualNetwork | HVN-CIDR | Allow | diff --git a/content/hcp-docs/content/docs/hcp/network/index.mdx b/content/hcp-docs/content/docs/hcp/network/index.mdx new file mode 100644 index 0000000000..9565551543 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/network/index.mdx @@ -0,0 +1,36 @@ +--- +page_title: HashiCorp Virtual Networks +description: |- + A HashiCorp Virtual Network (HVN) delegates an IPv4 CIDR range to HCP so that the platform can automatically create resources in your cloud network. +--- + +# HashiCorp Virtual Network + +The HashiCorp Virtual Network (HVN) makes HashiCorp Cloud Platform (HCP) networking possible. An HVN delegates an IPv4 CIDR range that HCP uses to automatically create resources in your cloud network. + +All HCP resources must be located in a single HVN. You cannot use a single product deployment across two HVNs, and product deployments cannot be moved from one HVN to another. You also cannot make changes to HVNs after you create them. + +You can manually configure your HVN or use HCP to provision an HVN with default configurations. 
To learn more, follow the tutorial for your supported infrastructure environment:
+
+## Amazon Web Services (AWS)
+
+### HCP Consul Dedicated
+
+- [Consul getting started tutorial](/consul/tutorials/get-started-hcp/hcp-gs-deploy)
+- [Consul cluster deployment tutorial](/hcp/tutorials/consul-cloud/consul-deploy)
+
+### HCP Vault Dedicated
+
+- [Vault getting started tutorial](/vault/tutorials/cloud)
+- [Vault deployment tutorial](/vault/tutorials/cloud-ops/terraform-hcp-provider-vault)
+
+## Microsoft Azure
+
+### HCP Consul Dedicated
+
+- [Get started with end-to-end deployment configuration tutorial](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-overview)
+- [Deploy HCP Consul Dedicated with AKS using Terraform](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-aks)
+
+### HCP Vault Dedicated
+
+- [Vault getting started tutorial](/vault/tutorials/cloud)
diff --git a/content/hcp-docs/content/docs/hcp/network/vpn-gcp.mdx b/content/hcp-docs/content/docs/hcp/network/vpn-gcp.mdx
new file mode 100644
index 0000000000..1194bb34f1
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/network/vpn-gcp.mdx
@@ -0,0 +1,514 @@
+---
+page_title: VPN access from GCP
+description: |-
+  This topic describes how to create a VPN between HCP and GCP using AWS as a transit network.
+---
+
+# VPN access from GCP
+
+
+
+The concepts and steps described in this documentation show an example of how to
+connect services running in a cloud provider that does not support native
+connectivity with the HashiCorp Cloud Platform (HCP).
+
+The steps assume an advanced understanding of network topology and
+configuration.
+
+
+
+The HashiCorp Cloud Platform (HCP) supports native connectivity solutions with multiple public cloud providers.
+By routing through a supported cloud provider, customers can use a transit network, a common hybrid cloud
+networking model, to support workloads with providers that HCP does not yet natively support.
+
+A transit network acts as a bridge between multiple networks. Commonly, this may be used to connect different
+VLAN or VXLAN networks.
+
+When running workloads in multiple public cloud providers, you can extend this model by connecting
+the public cloud providers through their supported networking services.
+
+Compute resources running in the Google Cloud Platform can access private HCP
+resources such as an HCP Consul Dedicated or HCP
+Vault Dedicated cluster by creating a VPN between GCP and a transit AWS VPC, then connecting the HCP HashiCorp
+Virtual Network (HVN) with the transit AWS VPC and configuring the necessary routing to direct traffic
+between the networks.
+
+![diagram-hcp-aws-gcp-transit-network](/img/docs/diagram-hcp-aws-gcp-transit-network.png)
+
+By following this documentation, you will create a Virtual Private Network (VPN) between AWS and GCP, create a transit
+gateway connection between HCP and AWS, and configure routing between each of the three platforms. When
+the network connectivity is complete, you will deploy a private instance of HCP Vault Dedicated and access it
+from a GCP VM instance to verify connectivity.
+
+Most of the instructions on this page guide you through the user interfaces (UIs)
+of AWS, GCP, and HCP.
+
+
+
+  It is recommended to follow this documentation with test accounts for GCP, AWS, and HCP.
+  The changes you make may impact connectivity of existing services. Be sure to validate that the
+  configuration changes will be supported in a production environment.
+
+
+
+## Prerequisites
+
+- An AWS and GCP account with the default configurations
+- An [HCP Account](https://portal.cloud.hashicorp.com/sign-in?utm_source=learn)
+- Non-overlapping network CIDR ranges for the HCP HVN and the AWS and GCP VPCs.
+- An HCP user assigned the contributor role (or higher) to perform the following actions:
+  - Create an HVN, HCP Vault Dedicated cluster, transit gateway attachment
+  - Update the HVN route table
+- AWS permissions to perform the following actions:
+  - Create a transit gateway, customer gateway, site-to-site VPN, and Resource Access Manager resource share
+  - Accept transit gateway connections
+- GCP permissions to perform the following actions:
+  - Create a cloud router, VPN, and VM instance
+
+## Create GCP VPN
+
+You will start the configuration of the GCP side of the VPN tunnel. When you have enough information from the GCP
+side, you will then configure the AWS side of the VPN before switching back to complete the setup in GCP.
+
+1. Open a web browser and log into your GCP account.
+
+1. From the Google Cloud console, click the hamburger menu and navigate to **Networking >> Hybrid Connectivity >> VPN**.
+
+1. Click **Create VPN Connection**.
+
+1. Select **High-availability (HA) VPN** and click **Continue**.
+
+1. Enter the following information:
+
+   - **VPN Gateway name:** `gcp-vpn-to-aws`
+   - **Network:** Select **default**
+   - **Region:** Select **us-east1**
+   - **VPN tunnel inner IP stack type:** Select **IPv4 (single stack)**
+
+1. Click **Create & Continue**.
+
+1. Make note of the provided IP addresses for the new VPN gateway.
+
+1. Remain logged into your GCP account. You will return to this page to continue with
+the configuration.
+
+## Create AWS VPN
+
+Now that you have the IP addresses for the GCP VPN gateway, you can begin the setup of the AWS side
+of the VPN.
+
+1. Open a new browser (or browser tab) and log into your AWS account.
+
+1. Verify you are in us-east-1 (**N. Virginia**).
+
+1. From the AWS console, click the **Services** menu and navigate to **Networking & Content Delivery >> VPC**.
+
+1. Click **Customer gateways** in the left navigation menu.
+
+1. 
Click **Create customer gateway** and enter the following information: + + - **Name tag - optional:** `cgw-in-gcp` + - **BGP ASN:** `65100` + - **IP address:** Enter the IP address for interface 0 from the GCP VPN gateway + +1. Click **Create customer gateway**. + +1. Click **Transit gateways** in the left navigation menu. + +1. Click **Create transit gateway** and enter the following information: + + - **Name tag:** `tgw-for-hcp` + - **Amazon side Autonomous System Number (ASN):** `64512` + +1. Click **Create transit gateway**. + +1. Click **Site-to-Site VPN connections** in the left navigation menu. + +1. Click **Create VPN connection** and enter the following information: + + - **Name tag - optional:** `aws-vpn-to-gcp` + - **Target gateway type:** Select **Transit gateway** + - **Transit gateway:** Select the **tgw-for-hcp** + - **Customer gateway:** Select **Existing** + - **Customer gateway ID:** Select **cgw-in-gcp** + +1. Click **Create VPN connection**. + +1. Wait for the **State** to change from **Pending** to **Available** then select **aws-vpn-to-gcp**. + +1. Click **Download configuration**. + +1. In the **Vendor** pulldown menu, select **Generic** and then click **Download**. + + You will use the values provided in the configuration file to complete the VPN and routing setup on the GCP side. + +## Review the AWS configuration file + +To complete the VPN setup you need to get the relevant VPN configuration from the downloaded configuration file. + +1. Open the configuration file in your preferred text editor. + +1. Locate the **IPSec Tunnel #1** section. + +1. In section **#1: Internet Key Exchange Configuration**, make note of the **Pre-Shared Key**. + + + + ```plaintext + #1: Internet Key Exchange Configuration + + ...snip... 
+ + - IKE version : IKEv1 + - Authentication Method : Pre-Shared Key + - Pre-Shared Key : CJ.ykyfeSOr61oWvjk2C5dbecuAW2wVs + - Authentication Algorithm : sha1 + - Encryption Algorithm : aes-128-cbc + - Lifetime : 28800 seconds + - Phase 1 Negotiation Mode : main + - Diffie-Hellman : Group 2 + ``` + + + +1. Locate section **#3: Tunnel Interface Configuration**. + +1. Make note of the **Outside IP Addresses** and **Inside IP Addresses**. + + + + ```plaintext + #3: Tunnel Interface Configuration + + ...snip... + + Outside IP Addresses: + - Customer Gateway : 35.242.15.66 + - Virtual Private Gateway : 3.212.197.15 + + Inside IP Addresses + - Customer Gateway : 169.254.221.74/30 + - Virtual Private Gateway : 169.254.221.73/30 + + Configure your tunnel to fragment at the optimal size: + - Tunnel interface MTU : 1436 bytes + + ``` + + + +1. Remain logged into your AWS account. You will return to this page to continue with + the configuration. + +## Connect VPN between GCP and AWS + +1. Return to the GCP console. + + + + You should return to the [create GCP VPN](#create-gcp-vpn) section. + + + +1. Under **Peer VPN gateway** select **On-prem or Non Google Cloud**. + +1. In the **Peer VPN gateway name** pulldown menu, select **Create new peer VPN gateway**. + +1. Enter the following information: + + - **Name:** `cgw-in-aws` + - **Interfaces:** Select **one interface** + - **Interface 0 IP address:** Enter the outside IP address for the **Virtual Private Gateway** + from the downloaded AWS configuration file. For example, using the sample configuration above enter `3.212.197.15`. + +1. Click **Create**. You will be returned to the **Create a VPN** wizard. + + **Create a single VPN tunnel** will be automatically selected for you. + +1. Under **Routing options** click the **Cloud Router** pulldown menu and select **Create new router**. + +1. 
Enter the following information:
+
+   - **Name**: `gcp-default-cr`
+   - **Google ASN:** `65100`
+
+
+
+  **Advertise all subnets visible to the Cloud Router (Default)** is used in
+  this example.
+  For production configurations, you should follow your organization's network and security practices to
+  choose between advertising all routes or creating custom routes.
+
+
+
+1. Click **Create**. You will be returned to the **Create a VPN** wizard.
+
+1. In the **Name** text box, enter `aws-vpn-tunnel-1`.
+
+1. In the **IKE pre-shared key** text box, enter the pre-shared key from the [AWS configuration file](#review-the-aws-configuration-file).
+   For example, using the sample configuration above, enter `CJ.ykyfeSOr61oWvjk2C5dbecuAW2wVs`.
+
+1. Click **Create & continue**.
+
+1. Under **Configure BGP sessions**, click **Configure BGP session**.
+
+1. Enter the following information:
+
+   - **Name:** `aws-bgp-peer`
+   - **Peer ASN:** `64512`
+
+1. Click **Save and continue**. You will be returned to the **Create a VPN** wizard.
+
+1. Click **Save BGP configuration**.
+
+1. Under **Summary and reminder**, click **OK**.
+
+1. From the **Cloud VPN Tunnels** tab, click **aws-vpn-tunnel-1**.
+
+1. Click **Edit BGP session**.
+
+1. Change the **Cloud Router BGP IPv4 address** to the inside IP address for the
+   **Customer Gateway** from the downloaded [AWS configuration
+   file](#review-the-aws-configuration-file).
+   For example, using the sample configuration above, enter the Inside IP Address for the Customer Gateway, `169.254.221.74`.
+
+1. Change the **BGP peer Router BGP IPv4 address** to the inside IP address for the
+   **Virtual Private Gateway** from the downloaded [AWS configuration
+   file](#review-the-aws-configuration-file).
+   For example, using the sample configuration above, enter the Inside IP Address for the Virtual Private Gateway, `169.254.221.73`.
+
+1. Click **Save and continue**.
+
+1. The **BGP session** status should change to **BGP established**.
+
+
+  If the status does not change, refresh the browser page (or browser tab).
+
+
+
+## Create and connect HCP resources
+
+Now that you have established a connection from your GCP account to the AWS account being used as a transit network, you will
+deploy and configure a HashiCorp Virtual Network (HVN) and connect the HVN to the AWS transit gateway.
+
+### Create HVN and transit gateway attachment
+
+
+
+Each HashiCorp Virtual Network (HVN) is created in a project based on a user-selected region.
+The HVN hosts other HCP resources such as HCP Vault Dedicated and HCP Consul Dedicated clusters.
+
+
+
+1. Open a new browser (or browser tab) and log into your HCP account.
+
+1. Click **HashiCorp Virtual Networks** in the left navigation menu.
+
+1. Click **Create network** and enter the following information:
+
+   - **Network name:** `hvn`
+   - **Provider:** Select **Amazon Web Services**
+   - **Region selection:** Select **N. Virginia (us-east-1)**.
+
+1. Click **Create network**.
+
+   Wait for the HVN to be available with a status of **Stable** before proceeding.
+
+1. Click **Transit gateway attachments** in the left navigation menu.
+
+1. Click **Create attachment** and click the **Web console** tab.
+
+1. Enter `tgw-attach-hcp` in the **Attachment ID** field.
+
+1. Copy the **AWS Account ID**.
+
+1. Remain logged into your HCP account. You will return to this page to continue with
+the configuration.
+
+### Create AWS resource share
+
+1. Return to the AWS console and navigate to **Resource Access Manager**.
+
+1. Click **Create a resource share**.
+
+1. In the **Name** field, enter `hcp-tgw-ram`.
+
+1. Under **Resources - optional**, select **Transit Gateways**.
+
+1. Click the checkbox for **tgw-for-hcp**.
+
+1. Click **Next** and then click **Next** again.
+
+1. Under **Principals**, paste the AWS account ID you copied from [the HCP Portal](https://portal.cloud.hashicorp.com/) and click **Add**.
+
+1. Click **Next**.
+
+1. Click **Create resource share**.
+
+1. 
Copy the **ARN** for the resource share. + +1. Navigate back to the VPC console and click **Transit gateways** in the left navigation menu. + +1. Copy the transit gateway ID for **tgw-for-hcp**. + +### Complete transit gateway attachment + +1. Return to [the HCP Portal](https://portal.cloud.hashicorp.com/). + +1. Enter the following information: + + - **Transit gateway ID:** ID for the transit gateway created previously in this tutorial. + - **Resource share ARN:** ARN for the resource share created previously in this tutorial. + +1. Click **Create attachment**. + +1. Return to the AWS VPC console and click **Transit gateway attachments** in the left navigation menu. + +1. Click the checkbox for the attachment with a resource type of **VPC** (it should be in the **Pending Acceptance** state). + +1. Click the **Actions** pulldown menu and select **Accept transit gateway attachment**. + +1. Click **Accept**. + +1. Wait for the state to change from **Pending** to **Available**. + +### Update HCP route table + +1. Switch back to [the HCP Portal](https://portal.cloud.hashicorp.com/). + +1. Click **Route table** in the left navigation menu. + +1. Click **Create route** and enter the following information: + + - **Route ID:** `route-aws-vpc` + - **Destinations:** Enter the subnet for your AWS VPC (You can retrieve this by clicking **Your VPCs** in the AWS console) + - **Target:** Select **tgw-attach-hcp** + +1. Click **Create route**. + +1. Click **Create route** again and enter the following information: + + - **Route ID:** `route-gcp-vpc` + - **Destinations:** Enter the subnet for your GCP VPC (You can retrieve this by clicking **VPC network >> VPC networks** in the GCP console) + - **Target:** Select **tgw-attach-hcp** + +1. Click **Create route**. + + + + For simplicity you created routes for the entire VPC in AWS and GCP. 
+  For production configurations, you should follow your organization's network and security practices to
+  choose between creating routes for the entire VPC or for specific subnets.
+
+
+
+## Validate networking configuration
+
+You have created all the necessary resources in your AWS, GCP, and HCP accounts. To validate the configuration, you will
+now deploy a private Vault Dedicated cluster and create a VM instance in GCP to test access across the AWS transit VPC.
+
+1. While still logged into [the HCP Portal](https://portal.cloud.hashicorp.com/), click **Back to Networks** and click **Vault** in the left navigation menu.
+
+1. Under **Start from scratch**, click **Create cluster**.
+
+1. Keep all defaults and verify the HVN you connected to the AWS transit gateway is selected.
+
+1. Click the slider for **Allow public connections from outside your selected network** to disable public access.
+
+1. Click **Create cluster**.
+
+1. While the cluster is being created, switch back to the GCP console.
+
+1. Click the hamburger menu and navigate to **Compute Engine >> VM instances**.
+
+1. Click **Create instance** and enter the following information:
+
+   - **Name:** `test-hcp`
+   - **Region:** Select **us-east1**
+
+1. Keep all other defaults and click **Create**.
+
+1. When the instance becomes available, click **SSH**. A new window will open and log you into the VM.
+
+1. From [the HCP Portal](https://portal.cloud.hashicorp.com/), click **Generate token**, then click **Copy** to copy the token.
+
+1. In the SSH session for your GCP VM instance, create an environment variable named `VAULT_TOKEN`.
+
+   ```shell-session
+   $ export VAULT_TOKEN=
+   ```
+
+1. From [the HCP Portal](https://portal.cloud.hashicorp.com/), under **Cluster URLs**, click **Private** to copy the
+   private Vault Dedicated address.
+
+1. In the SSH session for your GCP VM instance, create an environment variable named `VAULT_ADDR`.
+
+   ```shell-session
+   $ export VAULT_ADDR=
+   ```
+
+1. 
Create an environment variable named `VAULT_NAMESPACE` with a value of `admin`.
+
+   ```shell-session
+   $ export VAULT_NAMESPACE=admin
+   ```
+
+1. Validate the connection to the private Vault Dedicated cluster by listing token accessors with cURL.
+
+   ```shell-session
+   $ curl \
+      --header "X-Vault-Token: $VAULT_TOKEN" \
+      --header "X-Vault-Namespace: $VAULT_NAMESPACE" \
+      --request LIST \
+      $VAULT_ADDR/v1/auth/token/accessors
+   ```
+
+1. The request returns a list of token accessors.
+
+   **Example output:**
+
+
+
+   ```shell-session
+   $ curl \
+      --header "X-Vault-Token: $VAULT_TOKEN" \
+      --header "X-Vault-Namespace: $VAULT_NAMESPACE" \
+      --request LIST \
+      $VAULT_ADDR/v1/auth/token/accessors
+
+   {"request_id":"36116197-e3f1-17c0-1b63-b7b0cb1c7f9a","lease_id":"","renewable":false,"lease_duration":0,"data":{"keys":["7G9uBnJW08zEwjnQBs73b0q8.BZD6Q"]},"wrap_info":null,"warnings":null,"auth":null}
+   ```
+
+
+
+   You have successfully made a request to a private HCP Vault Dedicated cluster from your GCP VM instance through a transit AWS VPC.
+
+## Cleanup
+
+To avoid unnecessary charges, you should clean up any resources you created during this tutorial.
+ +### GCP + +- Delete the test-hcp VM instance +- Delete the Cloud VPN tunnel +- Delete the Cloud VPN gateway +- Delete the peer VPN gateway +- Delete the cloud router + +### AWS + +- Delete the site-to-site VPN +- Delete the transit gateway attachments +- Delete the transit gateway +- Delete the customer gateway + +### HCP + +- Delete the Vault Dedicated cluster +- Delete the HVN + +## Help and reference + +- [Build HA VPN connections between Google Cloud and AWS](https://cloud.google.com/architecture/build-ha-vpn-connections-google-cloud-aws) +- [Connect an Amazon Transit Gateway to your HashiCorp Virtual Network](/hcp/docs/hcp/network/hvn-aws/tgw-attach) \ No newline at end of file diff --git a/content/hcp-docs/content/docs/hcp/security/index.mdx b/content/hcp-docs/content/docs/hcp/security/index.mdx new file mode 100644 index 0000000000..6a71d135e9 --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/security/index.mdx @@ -0,0 +1,21 @@ +--- +page_title: HCP Security Overview +description: |- + This topic provides an overview of the HashiCorp Cloud Platform (HCP) security model. +--- + +# HCP Security Overview + +This topic describes the HashiCorp Cloud Platform's (HCP) security model and the security controls available to users. + +For more information about security offerings for specific products, refer to +[HCP Consul Dedicated](/hcp/docs/consul) and [HCP Vault Dedicated](/vault/docs/what-is-vault). For +information about HashiCorp's security teams and compliance programs, or to find +HashiCorp's public PGP keys and code signature verification, refer to [HashiCorp +Security and Trust Center](https://hashicorp.com/security). + +## Security Shared-Responsibility Model + +Security of the HashiCorp Cloud Platform (HCP) is a shared responsibility between HashiCorp and the customer. 
This shared model can help reduce the customer's operational burden. HashiCorp manages and controls certain components of the system, such as the operating system (for example, updates and security patches), while the customer assumes responsibility for access management, multi-factor authentication (MFA), and configuration of access control lists (ACLs).
+
+Refer to [HashiCorp Cloud Platform Roles/Responsibilities](https://portal.cloud.hashicorp.com/shared-responsibility-model) for more information.
diff --git a/content/hcp-docs/content/docs/hcp/supported-env/aws.mdx b/content/hcp-docs/content/docs/hcp/supported-env/aws.mdx
new file mode 100644
index 0000000000..825be98293
--- /dev/null
+++ b/content/hcp-docs/content/docs/hcp/supported-env/aws.mdx
@@ -0,0 +1,34 @@
+---
+page_title: AWS
+description: |-
+  This topic describes HCP support for Amazon Web Services (AWS) environments
+---
+
+# Support for AWS Environments
+
+HashiCorp Cloud Platform (HCP) supports Amazon Web Services (AWS) environments. For a general overview of the HCP architecture, refer to [How does HCP Work?](/hcp/docs/hcp#how-does-hcp-work).
+ +## Regions + +HCP services support AWS in these regions: + +| Region | Name | Available HCP Services | +|------------------|----------------|------------------------------------------------| +| Oregon | us-west-2 | Consul (October 2020), Vault (January 2021) | +| Virginia | us-east-1 | Consul (January 2021), Vault (March 2021) | +| Ohio | us-east-2 | Consul (February 2023), Vault (February 2023) | +| Canada (Central) | ca-central-1 | Consul (February 2023), Vault (February 2023) | +| Ireland | eu-west-1 | Consul (January 2021), Vault (March 2021) | +| London | eu-west-2 | Consul (January 2021), Vault (March 2021) | +| Frankfurt | eu-central-1 | Consul (January 2021), Vault (April 2021) | +| Tokyo | ap-northeast-1 | Consul (February 2023), Vault (February 2023) | +| Singapore | ap-southeast-1 | Consul (October 2021), Vault (October 2021) | +| Sydney | ap-southeast-2 | Consul (October 2021), Vault (October 2021) | + +Last Updated: February 8, 2023 + +Refer to the [HCP Data Privacy](https://www.hashicorp.com/trust/privacy/hcp-data-privacy) docs for more information about data privacy. + +### Request Region Support + +To request support for additional regions, complete the [HCP Region Survey](https://hashicorp.sjc1.qualtrics.com/jfe/form/SV_5dzdv5wCixVwnYO). diff --git a/content/hcp-docs/content/docs/hcp/supported-env/azure.mdx b/content/hcp-docs/content/docs/hcp/supported-env/azure.mdx new file mode 100644 index 0000000000..b7cc32026d --- /dev/null +++ b/content/hcp-docs/content/docs/hcp/supported-env/azure.mdx @@ -0,0 +1,38 @@ +--- +page_title: Azure +description: |- + This topic describes HCP support for Microsoft Azure environments +--- + +# Support for Azure Environments + +HashiCorp Cloud Platform (HCP) supports Microsoft Azure environments. For a general overview of the HCP architecture, refer to [How does HCP Work?](/hcp/docs/hcp#how-does-hcp-work). 
+
+## Regions
+
+HCP services support Azure in these regions:
+
+| Region | Name | Available HCP Services |
+|--------|------|------------------------|
+|West US 2 | westus2 | Consul (June 2022), Vault (October 2022) |
+|East US | eastus | Consul (June 2022), Vault (October 2022) |
+|Central US | centralus | Consul (June 2022), Vault (October 2022) |
+|East US 2 | eastus2 | Consul (June 2022), Vault (October 2022) |
+|Canada Central | canadacentral | Consul (February 2023), Vault (February 2023) |
+|Canada East | canadaeast | Vault (January 2025) |
+|West Europe | westeurope | Consul (June 2022), Vault (October 2022) |
+|North Europe | northeurope | Consul (June 2022), Vault (October 2022) |
+|France Central | francecentral | Consul (June 2022), Vault (October 2022) |
+|UK South | uksouth | Consul (June 2022), Vault (October 2022) |
+|South East Asia | southeastasia | Consul (February 2023), Vault (February 2023) |
+|Japan East | japaneast | Consul (February 2023), Vault (February 2023) |
+|Australia SouthEast | australiasoutheast | Consul (February 2023), Vault (February 2023) |
+
+**Last Updated**: January 30, 2025
+
+Refer to the [HCP Data Privacy](https://www.hashicorp.com/trust/privacy/hcp-data-privacy) docs for more information about data privacy.
+
+### Request Region Support
+
+To request support for additional regions, complete the [HCP Region Survey](https://hashicorp.sjc1.qualtrics.com/jfe/form/SV_5dzdv5wCixVwnYO).
diff --git a/content/hcp-docs/content/docs/index.mdx b/content/hcp-docs/content/docs/index.mdx
new file mode 100644
index 0000000000..089f6987e4
--- /dev/null
+++ b/content/hcp-docs/content/docs/index.mdx
@@ -0,0 +1,8 @@
+---
+page_title: Documentation
+description: |-
+  HashiCorp Cloud Platform (HCP) is a fully managed platform offering HashiCorp products as a service to automate infrastructure on any cloud.
+--- + +# HashiCorp Cloud Platform Documentation + diff --git a/content/hcp-docs/content/docs/packer/index.mdx b/content/hcp-docs/content/docs/packer/index.mdx new file mode 100644 index 0000000000..54a23a799f --- /dev/null +++ b/content/hcp-docs/content/docs/packer/index.mdx @@ -0,0 +1,53 @@ +--- +page_title: What is HCP Packer? +description: |- + HCP Packer is a registry that stores metadata associated with artifacts, such as machine images, built by Packer. Learn how HCP Packer helps you create, manage, and consume centralized artifacts. +--- + +# What is HCP Packer? + +HCP Packer stores metadata about the artifacts you build using [HashiCorp Packer](/packer), including when the artifact was created, associated platform, and which Git commit is associated with your build. HCP Packer bridges the gap between artifact creation and deployment by allowing cross-organizational teams to create, manage, and consume artifacts using centralized workflows. + +> **Hands On:** Complete the [Get Started with HCP Packer](/packer/tutorials/hcp-get-started) collection of tutorials to learn how to set up a Packer template, push metadata to the registry, and explore the registry UI. + +## How HCP Packer works + +HCP Packer stores the metadata associated with the artifacts you build but not the artifact. + +The HCP Packer workflow is built around artifact _creators_ and artifact _consumers_. Creators standardize artifact creation. They perform the following actions: + +- **Connect the HCP Packer registry**: Configure the Packer template so that Packer can push the metadata to HCP Packer. + +- **Store artifact metadata:** Build the artifact with Packer and push the metadata to HCP Packer. + +- **Manage the metadata**: Creators can create channels, revoke artifacts, and perform other management tasks to ensure that consumers use appropriate versions and builds. 
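+
+The creator workflow above can be sketched in a Packer template. The following is a minimal illustration, not an official example; the `amazon-ebs` source, base AMI, and the `learn-packer-ubuntu` bucket name are hypothetical placeholders:
+
+```hcl
+source "amazon-ebs" "ubuntu" {
+  ami_name      = "hcp-packer-demo-{{timestamp}}"
+  instance_type = "t2.micro"
+  region        = "us-west-2"
+  source_ami    = "ami-0123456789abcdef0" # placeholder base AMI
+  ssh_username  = "ubuntu"
+}
+
+build {
+  # The hcp_packer_registry block connects this build to an HCP Packer bucket
+  # so that Packer pushes the build metadata to the registry.
+  hcp_packer_registry {
+    bucket_name = "learn-packer-ubuntu"
+    description = "Ubuntu base artifact"
+    bucket_labels = {
+      "os" = "ubuntu"
+    }
+  }
+
+  sources = ["source.amazon-ebs.ubuntu"]
+}
+```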
+
+Artifact consumers build artifact layers or provision infrastructure by referencing the latest version of artifacts in Packer templates and Terraform configuration files.
+
+The following diagram illustrates the HCP Packer workflow:
+
+![Overview of HCP Packer metadata publishing and consumption](/img/docs/packer/hcp_packer_overview.png)
+
+## HCP Packer benefits
+
+Using HCP Packer to store artifact metadata in a central registry provides several benefits:
+
+- Keep track of artifact versions, build new artifacts using the most up-to-date base configuration, and deploy the most up-to-date downstream artifacts.
+
+- Clearly designate which artifacts are appropriate for test and production environments and query the correct artifacts for use in both Packer and Terraform configurations.
+
+  For example, you can create a `production` channel for artifacts that pass acceptance testing and are ready for production deployment. If an artifact becomes outdated or a security risk, you can [revoke it](/hcp/docs/packer/manage/revoke-restore) to prevent consumers from using it to build new artifacts. You can revoke access to the artifact itself, and you can also revoke all of its descendant artifacts.
+
+- Find and reference specific artifacts from a specific builder at a specific point in time. HCP Packer automatically tracks each artifact's source artifact.
+
+- View [ancestry information](/hcp/docs/packer/manage/ancestry) in the UI, which warns you when artifacts are outdated.
+
+## Tiers
+
+HCP Packer has an Essentials edition and a paid Standard edition available. Larger teams will benefit from the Standard edition, which provides advanced artifact compliance checks using the [HCP Terraform artifact validation run task](/hcp/docs/packer/store/validate-version). HashiCorp will continue to add new features to the Standard edition that serve more complex organizational requirements and use cases. 
+
+## Community
+
+Please submit questions, suggestions, and requests to [HashiCorp Discuss](https://discuss.hashicorp.com/c/packer/23).
+
+
diff --git a/content/hcp-docs/content/docs/packer/manage-registry/index.mdx b/content/hcp-docs/content/docs/packer/manage-registry/index.mdx
new file mode 100644
index 0000000000..d1931b55c9
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/manage-registry/index.mdx
@@ -0,0 +1,63 @@
+---
+page_title: Manage the Packer registry
+description: |-
+  The HCP Packer registry stores your artifact metadata. Learn how to upgrade to the Standard registry edition, deactivate your registry to pause billing, reactivate your registry, and delete your registry to permanently remove all of its data.
+---
+
+# Manage the Packer registry
+
+Each HCP Packer project can have one HCP Packer registry that contains all [artifact metadata](/hcp/docs/packer/store). This page explains how to change your HCP Packer tier as well as deactivate, reactivate, and delete an HCP Packer registry.
+
+## View and Change Registry Tier
+
+HCP Packer has an Essentials edition and a paid Standard edition available. Larger teams will benefit from the Standard edition, which provides advanced artifact compliance checks using the [HCP Terraform artifact validation run task](/hcp/docs/packer/store/validate-version#automatic-validation). HashiCorp will continue to add new features to the Standard edition that serve more complex organizational requirements and use cases.
+
+To view your registry tier:
+
+1. Click **Packer** in the sidebar to go to your HCP Packer registry.
+1. Click the **Manage** dropdown and select **Edit Registry** to go to the **Edit Registry** page. The current tier for your registry is highlighted.
+
+To change your registry tier:
+
+1. Go to the **Edit Registry** page and click a tier to select it.
+1. Click **Apply Changes** to change the tier.
+
+Tier changes take effect immediately, and HCP Packer will begin tracking billable usage. 
Refer to the [pricing page](/products/packer/pricing) for more details.
+
+## Deactivate Registry
+
+You can manually deactivate your registry to pause billing. HCP Packer also automatically deactivates registries with invalid payment methods or zero trial credits and restores them once those billing issues are resolved.
+
+Deactivated registries will no longer be able to create new buckets or versions. Packer and Terraform users will receive error messages when they perform operations that request artifact metadata from a deactivated registry. HCP Packer preserves data in deactivated registries for one year. You can [reactivate your registry](#reactivate-registry) at any time within that period.
+
+!> **Warning:** After one year, your registry will be permanently deleted, and you will no longer be able to recover any of its data.
+
+To deactivate your registry:
+
+1. Click **Packer** in the sidebar to go to your HCP Packer registry.
+1. Click the **Manage** dropdown and select **Deactivate Registry**. The **Deactivate Packer registry?** box appears.
+1. Type **DEACTIVATE** in the box to confirm and then click **Deactivate**.
+
+Your registry is now marked with a **Deactivated** banner in the HCP Packer UI.
+
+
+## Reactivate Registry
+
+HCP Packer preserves data in deactivated registries for one year. To reactivate your registry:
+
+1. Click the **Reactivate Registry** button at the top of the HCP Packer UI. The **Reactivate Packer registry?** box appears.
+1. Click **Reactivate**.
+
+You can now resume pushing metadata to the registry, creating channels, and performing other registry operations. Packer and Terraform consumers can once again reference artifact metadata.
+
+
+## Delete Registry
+
+!> **Warning:** Once HCP Packer deletes your registry, you will not be able to recover any of its data.
+
+You must [deactivate your registry](#deactivate-registry) before you can permanently delete it. To permanently delete your registry:
+
+1. 
Click **Packer** in the sidebar to go to your HCP Packer registry.
+1. Click the **Manage** dropdown and select **Delete Registry**. The **Delete Packer registry?** box appears.
+1. Type **DELETE** in the box to confirm and then click **Delete**.
+
+Your registry has been deleted from HCP Packer. Users and consumers cannot access or recover the data.
diff --git a/content/hcp-docs/content/docs/packer/manage/ancestry.mdx b/content/hcp-docs/content/docs/packer/manage/ancestry.mdx
new file mode 100644
index 0000000000..3d6550efc7
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/manage/ancestry.mdx
@@ -0,0 +1,88 @@
+---
+page_title: View artifact ancestry information
+description: Ancestry is the relationship between source and downstream artifacts. Learn how to view and manage artifact ancestry information, such as ancestry status.
+---
+# View artifact ancestry information
+
+This topic describes how to view ancestry information stored in HCP Packer. Ancestry refers to the relationship between source artifacts, or _parents_, and downstream descendants, or _children_. Refer to [Metadata storage overview](/hcp/docs/packer/store) for additional information about ancestry and other HCP Packer concepts.
+
+## Background
+
+Each time Packer pushes artifact metadata to the registry, HCP Packer creates an ancestry relationship between a child version and its parent versions. When the Packer template is configured to source artifacts from an HCP Packer channel, you can view the ancestry status in the HCP Packer UI.
+
+### Ancestry relationships
+
+The amount of detail HCP Packer can report about ancestry relationships depends on two factors. HCP Packer can report more information when it is _tracking_ the artifact, which means that the metadata for the artifacts was pushed to HCP Packer. Additionally, assigning parents to a channel and referencing them in your Packer template enables HCP Packer to report when a child is outdated. 
The following table describes how parent tracking and channel assignments affect ancestry information storage and retrieval:
+
+| Tracking parent | Parent assigned to channel | Ancestry details |
+|-----------------|----------------------------|------------------|
+|No|No| HCP Packer stores the parent’s `source_external_identifier`. You can query the HCP Packer API to get this ancestry information. Refer to [View ancestry](#view-ancestry) for details.|
+|Yes|No|HCP Packer creates an ancestry relationship. The ancestry status is set to `Undetermined` in the UI. You can automatically revoke each version's children. Refer to [Revoke and restore artifact versions](/hcp/docs/packer/manage/revoke-restore) for details.|
+|Yes|Yes|HCP Packer creates an ancestry relationship and displays the corresponding ancestry status in the UI. The UI warns you when the child is outdated. You can automatically revoke each version’s children.|
+
+
+### Ancestry status
+
+HCP Packer tracks ancestry status from the channel that the parent version was assigned to when Packer built the children. Refer to [Create and manage channels](/hcp/docs/packer/manage/channel) for additional information.
+
+The following table describes the ancestry statuses that HCP Packer can assign to a version:
+
+| Status | Description |
+| ------ | ----------- |
+|`Up to date`|The version's parent is currently assigned to the channel. The version may not have been built from the latest version of the parent. For example, if you rebuilt the parent without updating the channel, HCP Packer still reports all children as `Up to date`.|
+|`Out of date`| The version's parent is not currently assigned to the channel. For example, the channel points to the latest version if you recently rebuilt the parent artifact.|
+|`Undetermined`|HCP Packer cannot determine the child's status. 
|
+
+The following circumstances result in an `Undetermined` ancestry status:
+
+- The child is built, but the parent version is not assigned to a channel.
+- The channel no longer exists in the bucket.
+- HCP Packer is not tracking the parent.
+
+
+## Requirements
+
+- Artifacts must be built with Packer v1.8.2 or later.
+- To view ancestry information in the UI, the artifact must be built by Packer and the metadata pushed to HCP Packer.
+- Parent versions must be assigned to a channel prior to building children. Refer to [Create and manage channels](/hcp/docs/packer/manage/channel) for additional information.
+- Packer configurations must use channels to reference parent artifacts. Refer to [Reference artifact metadata](/hcp/docs/packer/store/reference) for additional information.
+
+## View ancestry
+
+To view ancestry status from the HCP Packer UI, click a bucket to open its Overview page. The **Ancestry** section displays each parent and child of the current artifact and their [ancestry status](#ancestry-status).
+
+### View untracked artifact parent ancestry
+
+Parent artifacts that are not tracked in HCP Packer are not visible from the UI. You can send a `GET` request to the [`buckets/{bucket_name}/versions/{fingerprint}`](/hcp/api-docs/packer#get-version) API to view ancestry information for artifacts sourced from untracked parents.
+
+The following example retrieves ancestry information using the fingerprint `01HM6MPATNN8F2MPQM7C5556M8`:
+
+```shell-session
+$ curl --request GET https://api.cloud.hashicorp.com/packer/2023-01-01/organizations/$ORGANIZATION_ID/projects/$PROJECT_ID/buckets/my-bucket/versions/01HM6MPATNN8F2MPQM7C5556M8 --header "authorization: Bearer $HCP_ACCESS_TOKEN"
+```
+
+The following example response shows the parent artifacts for the child version with fingerprint `01HM6MPATNN8F2MPQM7C5556M8`. 
You can view the parent's external identifier for each build in `source_external_identifier`:
+
+```json
+{
+  "version": {
+    // …
+    "builds": [
+      {
+        "id": "01G9MYAR0KHRHNA0ZSCEAK0G96",
+        "version_id": "01G9MYAQCSJAXTYYHZP6WE6Z6T",
+        "component_type": "amazon-ebs.basic-example",
+        "packer_run_uuid": "98092f7a-11d5-663c-dde9-c0ab67407392",
+        "artifacts": [
+          {
+            // …
+          }
+        ],
+        "platform": "aws",
+        "status": "DONE",
+        "source_external_identifier": "ami-0688ba7eeeeefe3cd"
+      }
+    ]
+  }
+}
+```
+
diff --git a/content/hcp-docs/content/docs/packer/manage/audit-logs.mdx b/content/hcp-docs/content/docs/packer/manage/audit-logs.mdx
new file mode 100644
index 0000000000..89a31a9416
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/manage/audit-logs.mdx
@@ -0,0 +1,173 @@
+---
+page_title: Enable audit log streaming
+description: |-
+  Audit logs report events so admins can track user activity. Learn how to enable audit log streaming to Amazon CloudWatch and Datadog.
+---
+
+# Enable audit log streaming
+
+This topic describes how to enable audit log streaming to Amazon CloudWatch and Datadog.
+
+~> **HCP Standard tier required:** Audit Logs are only available for HCP Standard edition registries. [Learn more about HCP Standard](https://www.hashicorp.com/products/packer/pricing).
+
+## Introduction
+
+HCP Packer supports near real-time streaming of audit events. Audit logs allow administrators to track user activity and enable security teams to ensure compliance in accordance with regulatory requirements. HCP Packer supports streaming audit logs to Datadog and Amazon CloudWatch.
+
+The HCP Packer platform stores audit logs for at least one year, and you can access logs for both active and deleted registries.
+
+## Requirements
+
+You can only stream to one external account at a time. 
+
+### Amazon CloudWatch
+
+You must have the **AWS ID** and **External ID** values from the HCP Packer Audit Logs page in the
+[HCP Portal](https://portal.cloud.hashicorp.com/) to set up Amazon CloudWatch.
+
+### Datadog
+
+- You must know which region your Datadog account is in.
+- You must have a Datadog API key. Refer to the [Datadog documentation](https://docs.datadoghq.com/account_management/api-app-keys/) for information about obtaining an API key.
+
+
+## Amazon CloudWatch
+
+1. From the HCP Packer **Overview** page, select the **Audit Logs** view.
+1. Click **Enable Log Streaming**.
+1. From the **Enable audit logs streaming** view, select **Amazon CloudWatch** as the provider.
+1. From the **Amazon CloudWatch** provider page, note the **AWS ID** and **External ID** values. You will need them in the next steps.
+1. Create an IAM policy and a role. Refer to [AWS setup](#aws-setup) for instructions.
+1. Under the provider, enter the **Destination name**, enter the **Role ARN** that you copied in the previous step, and select the **AWS region** where you intend to store your data.
+1. Click **Test connection** to receive a test event and confirm the connection has been established.
+1. Click **Save**.
+
+Logs should arrive in Amazon CloudWatch within a few minutes of using Packer.
+
+HCP Packer dynamically creates the log group and log streams. You can find the log group in your
+Amazon CloudWatch console with the prefix `/hashicorp` after setting up your configuration.
+This allows you to easily distinguish which logs are coming from HashiCorp. Note that the log group for the test event differs from the log group for actual events.
+
+Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) for details on log exploration.
+
+## Datadog
+
+1. From the HCP Packer **Overview** page, select the **Audit Logs** view.
+1. Click **Enable Log Streaming**.
+1. 
From the **Enable audit logs streaming** view, select **Datadog** as the provider.
+1. Under the provider, enter your **Destination name** and **API Key**, and select the **Datadog site region** that matches your existing Datadog environment.
+1. Click **Test connection** to receive a test event and confirm the connection has been established.
+1. Click **Save**.
+
+Logs should arrive in your Datadog environment within a few minutes of using Packer.
+Refer to the [Datadog documentation](https://docs.datadoghq.com/getting_started/logs/#explore-your-logs) for details on log exploration.
+
+## Testing streaming configuration
+
+During setup, you can test that your streaming configuration works from within HCP. Testing verifies that your credentials are correct and that the other configuration parameters work. To test, enter the necessary parameters for the logging provider you wish to test, then click the **Test connection** button.
+
+HCP sends a test message to the logging provider and shares the success or failure status on the **Enable log streaming** page.
+
+You can also test any updated streaming configurations to ensure they still work as intended.
+
+## Updating streaming configuration
+
+After configuring streaming, you might update your configuration for a variety of reasons. For example, you may want to rotate a secret used for your logging provider, or switch logging providers altogether.
+
+1. Select **Edit streaming configuration** from the **Manage** menu on the **Audit logs** page.
+1. If you want to select a new provider, do so now.
+1. Enter new parameters for the provider.
+1. (Optional) Test the connection by clicking **Test connection**.
+1. Click **Save**.
+
+
+## AWS Setup
+
+As part of the Amazon CloudWatch setup, you need to create an IAM policy and role. Refer to the following sections for instructions for your preferred method.
+
+### AWS Management Console
+
+1. 
[Create a new IAM policy using the AWS Management Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html#access_policies_create-start).
+1. Choose the JSON option, and copy and paste the following policy.
+
+    ```json
+    {
+      "Version": "2012-10-17",
+      "Statement": [
+        {
+          "Sid": "HCPLogStreaming",
+          "Effect": "Allow",
+          "Action": [
+            "logs:PutLogEvents",
+            "logs:DescribeLogStreams",
+            "logs:DescribeLogGroups",
+            "logs:CreateLogStream",
+            "logs:CreateLogGroup",
+            "logs:TagLogGroup"
+          ],
+          "Resource": "*"
+        }
+      ]
+    }
+    ```
+
+1. Finish the rest of the setup to create and save the policy.
+1. [Create a new IAM role with the AWS Management Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html).
+1. Choose **AWS account** as your trusted entity type.
+1. Select **Another AWS account** for the AWS account field.
+1. Use the **AWS ID** value from the HCP Packer Audit Logs page as your account ID.
+1. Select the **Require external ID** option.
+1. For the external ID, use the **External ID** value from your HCP Packer Audit Logs page.
+1. Finish the process of saving and creating your custom role.
+1. Attach the policy that you created in the previous steps to this role.
+1. Finish the role creation setup.
+1. Copy the *ARN* of the role and return to the [Amazon CloudWatch](#amazon-cloudwatch) section to finish the rest of the steps. 
+
+### Terraform
+
+The following Terraform configuration creates the same IAM policy and role. Replace the empty `identifiers` and `values` entries with the **AWS ID** and **External ID** values from the HCP Packer Audit Logs page.
+
+```hcl
+data "aws_iam_policy_document" "allow_hcp_to_stream_logs" {
+  statement {
+    effect = "Allow"
+    actions = [
+      "logs:PutLogEvents",       # To write logs to cloudwatch
+      "logs:DescribeLogStreams", # To get the latest sequence token of a log stream
+      "logs:DescribeLogGroups",  # To check if a log group already exists
+      "logs:CreateLogGroup",     # To create a new log group
+      "logs:CreateLogStream",    # To create a new log stream
+      "logs:TagLogGroup"         # To tag the log group
+    ]
+    resources = [
+      "*"
+    ]
+  }
+}
+
+data "aws_iam_policy_document" "trust_policy" {
+  statement {
+    sid     = "HCPLogStreaming"
+    effect  = "Allow"
+    actions = ["sts:AssumeRole"]
+    principals {
+      # Replace with the AWS ID value from the HCP Packer Audit Logs page
+      identifiers = [""]
+      type        = "AWS"
+    }
+    condition {
+      test     = "StringEquals"
+      variable = "sts:ExternalId"
+      values = [
+        # Replace with the External ID value from the HCP Packer Audit Logs page
+        ""
+      ]
+    }
+  }
+}
+
+resource "aws_iam_role" "role" {
+  name               = "hcp-log-streaming"
+  description        = "iam role that allows hcp to send logs to cloudwatch logs"
+  assume_role_policy = data.aws_iam_policy_document.trust_policy.json
+  inline_policy {
+    name   = "inline-policy"
+    policy = data.aws_iam_policy_document.allow_hcp_to_stream_logs.json
+  }
+}
+```
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/packer/manage/channel.mdx b/content/hcp-docs/content/docs/packer/manage/channel.mdx
new file mode 100644
index 0000000000..099c00d91f
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/manage/channel.mdx
@@ -0,0 +1,85 @@
+---
+page_title: Create and manage channels
+description: Channels are names you can add to artifact versions and reference in your Packer or Terraform configurations. Learn how to create and manage artifact channels in HCP Packer.
+---
+
+# Create and manage channels
+
+This topic describes how to create and manage channels in HCP Packer. Channels are human-readable names that artifact creators can assign base artifacts to. 
Artifact consumers can use channels in their Packer templates or Terraform configurations instead of hard-coding artifact versions, so that they automatically use the correct version in their applications.
+
+> **Hands On:** Complete the [Control artifacts with channels](/packer/tutorials/hcp-get-started/hcp-image-channels) tutorial to get started.
+
+## Overview
+
+You can create, update, and delete channels in the HCP Packer UI or with the HCP Packer API. The following outline describes the expected workflow for using channels:
+
+1. Build a version of the artifact and push the metadata to HCP Packer. Refer to [Push metadata to HCP Packer](/hcp/docs/packer/store/push-metadata) for instructions.
+1. Create one or more channels in the HCP Packer UI and assign the artifacts to their appropriate channels.
+1. Artifact consumers use the channel names in their Packer templates or Terraform configurations. When they build downstream artifacts, Packer or Terraform automatically pulls the appropriate version of the artifact. Refer to [Reference artifact metadata](/hcp/docs/packer/store/reference) for additional information.
+1. Update the artifact associated with the channel as you release new versions. As a result, consumers automatically reference the correct version on the registry without having to update their code.
+
+Updating a channel does not automatically notify downstream consumers or trigger downstream Packer builds or Terraform runs. Consumers automatically use the channel’s latest version the next time they execute pipelines that request artifact metadata from that channel.
+
+### Latest channel
+
+Every bucket has a `latest` channel by default. This channel is managed by HCP Packer and is automatically updated to the newest unrevoked version available in the bucket. The `latest` channel is restricted by default. You can choose to remove this restriction. For more information on restricted channels, refer to [Secure channel access](#secure-channel-access). 
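+
+For example, a Terraform consumer can resolve a channel to a concrete artifact through data sources. The following sketch uses the `hcp_packer_version` and `hcp_packer_artifact` data sources from the HCP Terraform provider; the bucket name and region are hypothetical placeholders, and attribute names may vary by provider version:
+
+```hcl
+# Resolve the version currently assigned to the channel.
+data "hcp_packer_version" "ubuntu" {
+  bucket_name  = "learn-packer-ubuntu"
+  channel_name = "latest"
+}
+
+# Look up the platform-specific artifact for that version.
+data "hcp_packer_artifact" "ubuntu" {
+  bucket_name         = "learn-packer-ubuntu"
+  version_fingerprint = data.hcp_packer_version.ubuntu.fingerprint
+  platform            = "aws"
+  region              = "us-west-2"
+}
+
+# Use the artifact's external identifier (for AWS, an AMI ID) in a resource.
+resource "aws_instance" "app" {
+  ami           = data.hcp_packer_artifact.ubuntu.external_identifier
+  instance_type = "t3.micro"
+}
+```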
+
+You can use the `latest` channel in Packer templates and Terraform configuration. Since the channel is restricted by default, you will need `Contributor` or `Admin` access to use it.
+
+You cannot change the version assigned to a bucket's `latest` channel or delete the channel.
+
+## Create channels
+
+You can only assign versions to a channel when they are complete and the registry has assigned them a version number.
+
+1. From [the HCP Portal](https://portal.cloud.hashicorp.com/), click **Packer** in the **Services** sidebar.
+1. Click a bucket in the **Artifact ID** column.
+1. Click **Channels** in the sidebar.
+1. Click **+ New Channel** and specify a name. You must enter a unique string. Note that `latest` is a reserved channel name that HCP Packer automatically creates. Refer to [Latest channel](#latest-channel) for additional information.
+1. Choose a version from the **Assign to a version:** drop-down menu. This version is automatically used to build downstream artifacts when consumers reference the channel name in their Packer templates or Terraform configurations. You can also leave the version blank to create a placeholder for future versions.
+1. Enable one of the following options in the **Channel access** field:
+   - **Unrestricted:** The channel is visible to every member of your organization.
+   - **Restricted:** The channel is visible to users in your organization that have permission to create, edit, and delete resources.
+   Refer to [Secure channel access](#secure-channel-access) for additional information.
+1. Click **Create channel**.
+
+## View channel details
+
+The HCP Packer UI shows information about the artifact versions assigned to your channels.
+
+1. From [the HCP Portal](https://portal.cloud.hashicorp.com/), click **Packer** in the **Services** sidebar.
+1. Click a bucket in the **Artifact ID** column to open its **Overview** screen.
+1. Click **Channels** in the sidebar to open the **Channels** screen.
+1. 
Click a channel to open its **Overview** screen. The overview shows the following information:
+   - Assigned version.
+   - Assignment history. HCP stores the history for up to one year. The number of entries depends on your subscription tier. Refer to the [Packer pricing page](https://www.hashicorp.com/products/packer/pricing) to learn about available tiers. Refer to [View and Change Registry Tier](/hcp/docs/packer/manage/registry#view-and-change-registry-tier) to upgrade the HCP Packer registry tier.
+   - Ancestry status. The overview page lets you know when an artifact is outdated. Refer to [View ancestry](/hcp/docs/packer/manage/ancestry) for additional information.
+
+## Edit and delete channels
+
+When you delete a channel, HCP Packer also permanently deletes its assignment history. We recommend notifying consumers before making changes to a channel, because HCP Packer does not notify consumers about changes.
+
+1. Go to a bucket and click **Channels** in the sidebar. The **Channels** screen appears with a list of all existing channels in this bucket.
+1. Open the ellipses menu for the channel you want to edit or delete.
+1. You can perform the following actions:
+   - **Edit assigned version:** Choose another version and click **Update Channel**.
+   - **Edit channel access:** Choose a different channel access type and click **Confirm**.
+   - **Delete Channel:** If you are sure you want to delete this channel, click **Delete**.
+
+## Restore deleted channels
+
+To restore a deleted channel, [add a new channel](#create-channels) with the same channel name and assigned version.
+
+## Secure channel access
+
+You can restrict channel access to only users with the contributor or admin role for the organization or project. Refer to [Users](/hcp/docs/hcp/iam/users) for additional information.
+
+Restricted channels enable you to validate and test artifacts before making them available to downstream consumers in unrestricted channels. 
+
+Restricted channels are not visible to users assigned the viewer role. By default, the `latest` channel is created as a
+restricted channel.
+
+## Next steps
+
+You can use the HCL generator on the HCP Packer channel overview screen to create a data source snippet for the Packer or Terraform configuration languages. We recommend using data sources to retrieve artifact metadata for building children with Packer or deploying an artifact with Terraform. Refer to [Reference artifact metadata](/hcp/docs/packer/store/reference) for instructions.
+
+
diff --git a/content/hcp-docs/content/docs/packer/manage/revoke-restore.mdx b/content/hcp-docs/content/docs/packer/manage/revoke-restore.mdx
new file mode 100644
index 0000000000..67f3f12333
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/manage/revoke-restore.mdx
@@ -0,0 +1,95 @@
+---
+page_title: Revoke and restore artifact versions
+description: |-
+  Revoke outdated artifact versions to prevent consumers from accessing their metadata. Learn how to revoke and restore artifact versions in HCP Packer.
+---
+
+# Revoke and restore artifact versions
+
+This topic describes how to revoke versions of artifacts that you no longer want to make available to consumers. It also describes how to restore versions that have been revoked.
+
+> **Hands on**: Complete the [Revoke an Artifact and its Descendants](/packer/tutorials/hcp/revoke-image) tutorial to learn about how revocation works.
+
+## Introduction
+
+If an artifact becomes outdated or a security risk, you can revoke the outdated or insecure version to prevent consumers from using it to build artifacts. HCP Packer marks the version as revoked in the HCP Packer UI. Packer cannot build artifacts from templates that reference a revoked version.
+
+### Workflows
+
+You can either revoke artifacts on demand or, if you are on an HCP Packer Standard edition, schedule artifacts to be revoked later. We recommend immediately revoking artifacts that have security vulnerabilities. 
+ +Terraform configurations that reference the revoked artifact version still retrieve metadata, but HCP Packer adds the `revoke_at` attribute set to the timestamp of when the version was revoked. Terraform consumers can use this attribute to validate the version. Refer to [Reference artifact metadata](/hcp/docs/packer/store/reference) for additional information. The HCP Terraform artifact validation run task also scans the configuration and flags any planned resources that reference revoked versions. Refer to [Validate builds](/hcp/docs/packer/store/validate-version) for additional information. + +### Ancestry + +HCP Packer automatically tracks how artifacts are related to each other to trace changes and vulnerabilities from an artifact to all of its descendants. Refer to [Ancestry](/hcp/docs/packer/manage/ancestry) for more details. + +When you revoke an artifact version, you can choose to automatically revoke all of its downstream descendants in HCP Packer. Doing so helps prevent consumers from using outdated artifacts. When an artifact has been revoked, the HCP Packer UI displays information about the revoked status that a child version may inherit from its parent, including a link to the revoked ancestor. + +You can still schedule an earlier revocation date or immediately revoke children that are scheduled to be revoked as a result of their parent's scheduled revocation. Note that a child version may have more than one parent. Refer to [Precedence](/hcp/docs/packer/manage/revoke-restore#precedence) for information about how HCP Packer determines revocation precedence. + +### Precedence + +You can explicitly revoke a child artifact version or revoke its parent so that the child inherits the revocation. An artifact version can have multiple parents. As a result, a child can inherit multiple revocations. When multiple revocations apply, HCP Packer uses the following rules to determine revocation precedence: + +1. 
**Explicit revocation:** Explicitly revoking the version, either on demand or scheduled, takes precedence over all inherited revocations. If a version is revoked multiple times, the earliest date takes precedence. + +1. **Earliest revocation:** When a child inherits multiple revocations, the earliest revocation date takes precedence. For example, if you schedule ancestor A for revocation at 5 PM and then schedule ancestor B for revocation at 4 PM the same day, HCP Packer revokes the child version at 4 PM. + +## Requirements + +A [Standard edition](https://cloud.hashicorp.com/products/packer/pricing) is required to schedule revocation. Refer to [Manage registry](/hcp/docs/packer/manage-registry) for details about viewing and changing your registry tier. + +## Revoke an artifact version + +1. Click **Versions** in the sidebar to view a list of all versions within a bucket. +1. Open the ellipses menu for the version you want to revoke and choose **Revoke Version**. +1. (Optional) Enter an explanation for revoking the version in the **Reason** field. HCP Packer shows this message on the version details page after the version has been revoked. +1. If you are on a tier that enables you to schedule a revocation, choose **Revoke immediately** from the **When** dropdown menu. +1. Choose **Yes, revoke all descendants** or **No, only revoke version** from the **Revoke descendants?** dropdown menu. Refer to [Ancestry](#ancestry) for additional information. +1. If this version is assigned to a user-created channel, choose **Yes, rollback channel** from the **Rollback channels** dropdown menu to reassign the last valid and unrevoked version to each channel. Otherwise, you must manually un-assign the version from all user-created channels before proceeding. +1. Click **Revoke**. + +You can [restore the version](#restore-a-version) at any time. 
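The precedence rules described above can be sketched in a few lines. This is an illustrative model only, not HCP Packer's implementation: an explicit revocation (on demand or scheduled) always wins over inherited ones, and when several dates apply within a category, the earliest one takes precedence.

```python
from datetime import datetime

def effective_revocation(explicit_dates, inherited_dates):
    """Illustrative sketch of HCP Packer's revocation precedence rules:
    explicit revocations take precedence over inherited ones, and when
    several dates apply, the earliest one wins."""
    if explicit_dates:
        return min(explicit_dates)
    if inherited_dates:
        return min(inherited_dates)
    return None  # version is not revoked

# A child inherits two scheduled revocations: ancestor A at 5 PM, ancestor B at 4 PM.
ancestor_a = datetime(2024, 5, 1, 17, 0)
ancestor_b = datetime(2024, 5, 1, 16, 0)
print(effective_revocation([], [ancestor_a, ancestor_b]))  # 2024-05-01 16:00:00
```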
+ +## Schedule an artifact version to be revoked + +You can set a time to live (TTL) on artifacts, which prevents consumers from using outdated artifacts. An HCP Packer Standard edition is required. Refer to [Requirements](#requirements) for additional information. + + +1. Click **Versions** in the sidebar to view a list of all versions within a bucket. +1. Open the ellipses menu for the version you want to revoke and choose **Revoke Version**. +1. (Optional) Enter an explanation for revoking the version in the **Reason** field. HCP Packer shows this message on the version details page after the version has been revoked. +1. Choose **Revoke at a future date** from the **When** dropdown and specify a date and time. Consumers can use this version's metadata until the specified date and time. +1. Choose whether to schedule the revocation for all descendant versions. Refer to [Ancestry](#ancestry) for additional information. +1. If this version is assigned to a user-created channel, choose **Yes, rollback channel** from the **Rollback channel** dropdown menu to reassign the last valid and unrevoked version to each channel when HCP Packer revokes the version. If the channel has a valid version assigned at the time of scheduled revocation, no rollback occurs. +1. Click **Revoke**. + +The HCP Packer UI indicates that the version is scheduled to be revoked on the version details screen and adds a tag on any associated channels. Consumers can continue to use the version until the specified date and time that HCP Packer is scheduled to revoke the version. + +You can cancel the revoke action any time before it occurs. Refer to [Cancel a scheduled revocation](#cancel-a-scheduled-revocation) for additional information. + +At the specified date and time, HCP Packer marks channels that point to revoked versions with a `Revoked` tag in the UI. We recommend notifying consumers and removing the revoked version from all associated channels. 
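As noted earlier, consumers can use the `revoke_at` timestamp that HCP Packer attaches to revoked or scheduled-for-revocation versions to decide whether a version is still usable. The following is a minimal sketch of that check; `version_is_usable` is a hypothetical helper for illustration, not part of any HCP SDK:

```python
from datetime import datetime, timezone

def version_is_usable(revoke_at, now=None):
    """Hypothetical consumer-side check: a version is usable when revoke_at
    is unset, or when the scheduled revocation is still in the future."""
    if not revoke_at:
        return True
    now = now or datetime.now(timezone.utc)
    revoked_at = datetime.fromisoformat(revoke_at.replace("Z", "+00:00"))
    return revoked_at > now

print(version_is_usable("2023-07-14T17:34:31Z"))  # False: revocation date has passed
print(version_is_usable(None))                    # True: never revoked
```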
+ +## Restore a version + +Revoked versions remain available in HCP Packer until you manually delete them from your registry. You can restore them so that their metadata is available to consumers. + +1. Go to the version's details page and click **Restore version**. +1. When prompted, click **Restore version** to confirm that you want to restore the version. + +The restored version metadata is immediately available to consumers. HCP Packer removes the `Revoked` tag in the UI. HCP Packer does not automatically re-add artifacts to channels. As a result, you must manually re-add the artifact to any previously associated channels. + +You cannot restore a version if an ancestor version has been revoked. Restore the revoked ancestor to automatically restore all of its descendants. Refer to [Ancestry](#ancestry) for additional information. + +## Cancel a scheduled revocation + +You can cancel a scheduled revocation any time before the specified date. + +1. Go to the version's details page and click **Cancel scheduled revoke**. +1. When prompted, click **Cancel scheduled revoke** to confirm that you want to prevent the version from being revoked. + +The HCP Packer UI removes the `Scheduled for revoke` status. + +You cannot cancel a revocation for a child version when an ancestor version is scheduled to be revoked. Cancel the scheduled revocation for the ancestor to automatically cancel the revocation for all of its descendants. Refer to [Ancestry](#ancestry) for additional information. + diff --git a/content/hcp-docs/content/docs/packer/reference/audit-log.mdx b/content/hcp-docs/content/docs/packer/reference/audit-log.mdx new file mode 100644 index 0000000000..39effeb93e --- /dev/null +++ b/content/hcp-docs/content/docs/packer/reference/audit-log.mdx @@ -0,0 +1,456 @@ +--- +page_title: Audit log descriptions and metadata reference +description: |- + Audit logs report events and metadata so admins can track user activity.
Learn about the descriptions and metadata available in HCP Packer audit logs. +--- + +# Audit log descriptions and metadata + +This topic provides reference information about the audit data HCP Packer logs. + +~> **Requires HCP Standard tier registry**. You must have an HCP Standard subscription to enable audit logs. [Learn more about HCP Standard](https://www.hashicorp.com/products/packer/pricing). + +## Overview + +HCP Packer audit logs contain the following components: + +- `description`: Brief explanation about the event +- `metadata`: Contains information about associated resources, including the `organization`, `project`, and `actor` + + +## Shared metadata fields + +The `metadata` in each audit log is a JSON object. The following metadata fields are in all HCP Packer audit logs. + +Unless the description notes otherwise, all metadata fields return the `string` type. + +| Field | Description | +| ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `status` | The state or outcome of the event for which the audit log is sent. Returns either "OK" or "FAILED". | +| `action` | The type of the event. Returns "create", "update", "delete", or "read". | +| `description` | A short explanation about the event. The section for each resource describes the descriptions to expect in different scenarios. | +| `organization_id` | The HCP organization ID. | +| `project_id` | The HCP Packer project ID. | +| `timestamp` | The UTC datetime when the event took place, in [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format. For example, `2023-07-12T15:50:02Z`. | +| `actor` | The entity (user, service, or internal operator) who initiated the event. This field returns a `JSON` object. | +| `actor.principal_id` | The ID of the actor. | +| `actor.type` | The type of actor.
This field returns "TYPE_UNSET", "TYPE_USER", "TYPE_SERVICE", "TYPE_INTERNAL_OPERATOR", or "TYPE_ANONYMOUS". | +| `actor.user.email` | This field is present if the `actor` is "TYPE_USER". | +| `actor.user.name` | This field is present if the `actor` is "TYPE_USER". | +| `actor.user.id` | This field is present if the `actor` is "TYPE_USER". | +| `actor.service.id` | This field is present if the `actor` is "TYPE_SERVICE". | +| `actor.service.name` | This field is present if the `actor` is "TYPE_SERVICE". | +| `actor.service.user_managed` | This field is present if the `actor` is "TYPE_SERVICE" and returns the `bool` data type. | +| `actor.internal_operator.id` | This field is present if the `actor` is "TYPE_INTERNAL_OPERATOR". | +| `error` | If an event fails, this field is available and describes the error. If this field is present, the audit log metadata only returns the fields listed [in the table above](#shared-metadata-fields). | + + +## Bucket events and metadata fields + +HCP Packer sends audit logs for the following events on the Bucket and Bucket Labels resources. + +| Event | Description | +| -------------- | --------------------- | +| Created | Created bucket | +| Deleted | Deleted bucket | +| Updated | Updated bucket | +| Created labels | Added bucket labels | +| Updated labels | Updated bucket labels | + +Depending on your event's status, the following fields are available in your audit log's metadata. + +| Field | Description | +| ----------------------- | ------------------------------------------------------------------------------------------------------------------ | +| `registry.id` | The ID of the HCP Packer registry. | +| `bucket.id` | The ID of the bucket. | +| `bucket.name` | User-given name of the Bucket. | +| `bucket.labels` | All labels assigned to the Bucket during create or update. Data type: `JSON Object` | +| `bucket.new_labels` | Labels added while updating the bucket. Data type: `JSON Object`. Present for bucket update event only.
| +| `bucket.updated_labels` | Updated existing labels while updating the bucket. Data type: `JSON Object`. Present for bucket update event only. | + +### Example + +```json +{ + "action":"create", + "actor":{ + "principal_id":"test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed", + "service":{ + "id":"test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed", + "name":"test-auditlogs", + "user_managed":true + }, + "type":"TYPE_SERVICE" + }, + "bucket":{ + "id":"01H5APVEP375TRT23HGH10YTXR", + "labels":{ + "test":"test label" + }, + "name":"bucket-test-2" + }, + "description":"Added bucket labels", + "organization_id":"77f447d4-def0-46f2-bf09-6850d36745ed", + "project_id":"a98c3c31-5760-4db1-b62b-0988080a66ad", + "registry":{ + "id":"01GNZQS84K3PTGVVB2YY9R81BC" + }, + "status":"OK", + "timestamp":"2023-07-14T17:23:21Z" +} +``` + + +## Version events and metadata fields + +HCP Packer sends audit logs for the following events on Version resource. + +| Event | Description | +| -------------------- | ---------------------------- | +| Started | Created version | +| Finished | Completed version | +| Revoked | Revoked version | +| Restored | Restored version | +| Deleted | Deleted version | +| Revocation Scheduled | Scheduled version revocation | +| Revocation Cancelled | Cancelled version revocation | + +Depending on your event's status, the following fields are available in your audit log's metadata. + +| Field | Description | +| -------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | +| `registry.id` | The ID of the HCP Packer registry. | +| `bucket.id` | The ID of the bucket. | +| `bucket.name` | User-given name of the Bucket. | +| `version.id` | ID of the Version. | +| `version.fingerprint` | User-given version identifier. | +| `version.name` | Human-readable name of the version incrementally set when all builds are successful. 
| +| `version.revoke_at` | Date and time the version was revoked or is scheduled to be revoked. | +| `version.revocation_message` | Message provided by the user when revoking the version or scheduling the version to be revoked. | +| `version.revocation_author` | The actor who revoked the version or scheduled the version to be revoked. | +| `version.status` | Current state of the Version. Possible values: `RUNNING`, `CANCELLED`, `REVOKED`, `REVOCATION_SCHEDULED`, `ACTIVE` | +| `builds` | List of builds in the version. | +| `builds.id` | ID of the build. | +| `builds.platform` | Platform of the build. For example, `aws` or `azure`. | +| `builds.component_type` | Builder or post-processor used on the build. For example, `amazon-ebs.ubuntu`. | +| `builds.labels` | Labels of the build. Data type: `JSON Object` | +| `builds.artifacts` | The list (array) of artifacts in the build. | +| `builds.artifacts.region` | Region of the artifact. For example, `eu-west-1`. | +| `builds.artifacts.external_identifier` | External identifier of the artifact. For example, `ami-13245456`.
| + +### Example + +```json +{ + "action":"update", + "actor":{ + "principal_id":"6f212631-5bcc-48a2-9082-37d752904032", + "type":"TYPE_USER", + "user":{ + "email":"test.user@hashicorp.com", + "id":"6f212631-5bcc-48a2-9082-37d752904032", + "name":"test.user@hashicorp.com" + } + }, + "bucket":{ + "id":"01GXXGSNEE1EMJEZ0TEH7KCQVX", + "name":"bucket-test" + }, + "description":"Revoked version", + "version":{ + "fingerprint":"f2", + "id":"01GXXGWAF8ZKF151591R6YXWEM", + "revocation_author":"test.user@hashicorp.com", + "revocation_message":"test", + "revoke_at":"2023-07-14 17:34:31.196808811 +0000 UTC", + "status":"VERSION_REVOKED", + "name":"v3" + }, +"builds":[ + { + "platform":"aws", + "component_type":"amazon-ebs.ubuntu", + "id":"01H5APPBYYF4D0NMVZCRKR85E7", + "artifacts":[ + { + "external_identifier":"ami-f2", + "region":"us-west-2" + } + ], + "labels":{ + "os":"ubuntu" + } + } +], + "organization_id":"77f447d4-def0-46f2-bf09-6850d36745ed", + "project_id":"a98c3c31-5760-4db1-b62b-0988080a66ad", + "registry":{ + "id":"01GNZQS84K3PTGVVB2YY9R81BC" + }, + "skip_descendants_revocation":true, + "status":"OK", + "timestamp":"2023-07-14T17:34:31Z" +} +``` + + +## Build events and metadata fields + +HCP Packer sends audit logs for the following events on Build resource. + +| Event | Description | +| --------------------------------------------------------- | ------------- | +| Build Started | Created build | +| Build finished successfully _OR_ with an error, timed out | Updated build | + +Depending on your event's status, the following fields are available in your audit log's metadata. + +| Field | Description | +| ------------------------------------- | ---------------------------------------------------------------------------------------------------- | +| `registry.id` | The ID of the HCP Packer registry. | +| `bucket.id` | The ID of the bucket. | +| `bucket.name` | User-given name of the Bucket. | +| `version.id` | ID of the Version. 
| +| `version.fingerprint` | User-given version identifier. | +| `version.name` | Human-readable name of the version incrementally set when all builds are successful. | +| `version.revoke_at` | Date and time the version was revoked or is scheduled to be revoked. | +| `version.revocation_message` | Message provided by the user when revoking the version or scheduling the version to be revoked. | +| `version.revocation_author` | The actor who revoked the version or scheduled the version to be revoked. | +| `build.id` | ID of the Build. | +| `build.source_external_identifier` | The external identifier of the base layer. For example, `ami-13245456`. | +| `build.source_version_id` | The parent version ID. | +| `build.source_build_id` | The parent build ID. | +| `build.source_channel_id` | The base channel ID if created from the channel. | +| `build.source_channel_name` | The user-readable name of the source channel. | +| `build.source_channel_managed` | Whether the source channel is managed by HCP Packer. For example, the `latest` channel. Data type: `bool` | +| `build.platform` | Platform of the build. For example, `aws` or `azure`. | +| `build.component_type` | Builder or post-processor used on the build. For example, `amazon-ebs.ubuntu`. | +| `build.status` | The current state of the Build. Possible values: `UNSET`, `RUNNING`, `DONE`, `CANCELLED`, `FAILED` | +| `build.labels` | Labels of the build. Data type: `JSON Object` | +| `build.artifacts` | The list (array) of artifacts in the build. | +| `build.artifacts.region` | Region of the artifact. For example, `eu-west-1`. | +| `build.artifacts.external_identifier` | External identifier of the artifact. For example, `ami-13245456`. | +| `build.metadata` | Metadata relating to Packer, its plugins, and the state of the build environment.
| + + +### Example + +```json + { + "action":"update", + "actor":{ + "principal_id":"test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed", + "service":{ + "id":"test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed", + "name":"test-auditlogs", + "user_managed":true + }, + "type":"TYPE_SERVICE" + }, + "bucket":{ + "id":"01GXXGSNEE1EMJEZ0TEH7KCQVX", + "name":"bucket-test" + }, + "build":{ + "platform":"aws", + "component_type":"aws", + "id":"01H5APPBYYF4D0NMVZCRKR85E7", + "artifacts":[ + { + "external_identifier":"ami-f2", + "region":"us-west-2" + } + ], + "metadata": { + "packer": { + "version": "1.10.2", + "plugins": [ + { + "name": "Azure", + "version": "2.1.4" + } + ] + } + }, + "labels":{ + "os":"ubuntu" + }, + "status":"DONE" + }, + "description":"Updated build", + "version":{ + "fingerprint":"f14", + "id":"01H5APNAK1BNEVMK3HPS7KZANV", + "name":"v5" + }, + "organization_id":"77f447d4-def0-46f2-bf09-6850d36745ed", + "project_id":"a98c3c31-5760-4db1-b62b-0988080a66ad", + "registry":{ + "id":"01GNZQS84K3PTGVVB2YY9R81BC" + }, + "status":"OK", + "timestamp":"2023-07-14T17:21:09Z" + } +``` + +### Example with an error + +```json +{ + "action":"create", + "actor":{ + "principal_id":"test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed", + "service":{ + "id":"test-auditlogs-911479@77f447d4-def0-46f2-bf09-6850d36745ed", + "name":"test-auditlogs", + "user_managed":true + }, + "type":"TYPE_SERVICE" + }, + "bucket":{ + "id":"01GXXGSNEE1EMJEZ0TEH7KCQVX", + "name":"bucket-test" + }, + "description":"Created build", + "error":"rpc error: code = FailedPrecondition desc = This version is complete.
If you wish to add a new build a new version must be created by changing the build fingerprint.", + "version":{ + "fingerprint":"f14", + "id":"01H5APNAK1BNEVMK3HPS7KZANV", + "name":"v5" + }, + "organization_id":"77f447d4-def0-46f2-bf09-6850d36745ed", + "project_id":"a98c3c31-5760-4db1-b62b-0988080a66ad", + "registry":{ + "id":"01GNZQS84K3PTGVVB2YY9R81BC" + }, + "status":"FAILED", + "timestamp":"2023-07-14T17:31:11Z" +} +``` + +## Channel events and metadata fields + +HCP Packer sends audit logs for the following events on Channel resource. + +| Event | Description | +| ---------------- | --------------------------- | +| Created | Created channel | +| Deleted | Deleted channel | +| Updated settings | Updated channel | +| Version Assigned | Assigned version to channel | + +Depending on your event's status, the following fields are available in your audit log's metadata. + +| Field | Description | +| ----------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | +| `registry.id` | The ID of the HCP Packer registry. | +| `bucket.id` | The ID of the bucket. | +| `bucket.name` | User-given name of the Bucket. | +| `version.id` | ID of the Version. If a version is assigned to the channel. | +| `version.fingerprint` | User-given version identifier. If a version is assigned to the channel. | +| `version.name` | Human-readable name of the version incrementally set when all builds are successful. If a version is assigned to the channel. | +| `version.revoke_at` | Date and time the version was revoked or is scheduled to be revoked. If a version is assigned to the channel. | +| `version.revocation_message` | Message provided by the user when revoking the version or scheduling the version to be revoked. If a version is assigned to the channel. 
| +| `version.revocation_author` | The actor who revoked the version or scheduled the version to be revoked. If a version is assigned to the channel. | +| `builds` | List of builds in the version. | +| `builds.id` | ID of the build. | +| `builds.platform` | Platform of the build. For example, `aws` or `azure`. | +| `builds.component_type` | Builder or post-processor used on the build. For example, `amazon-ebs.ubuntu`. | +| `builds.labels` | Labels of the build. Data type: `JSON Object` | +| `builds.artifacts` | The list (array) of artifacts in the build. | +| `builds.artifacts.region` | Region of the artifact. For example, `eu-west-1`. | +| `builds.artifacts.external_identifier` | External identifier of the artifact. For example, `ami-13245456`. | +| `previous_version.id` | ID of the Version. If a version was previously assigned to the channel. | +| `previous_version.fingerprint` | User-given version identifier. If a version was previously assigned to the channel. | +| `previous_version.name` | Human-readable name of the version incrementally set when all builds are successful. If a version was previously assigned to the channel. | +| `previous_builds` | List of builds in the version previously assigned to the channel. Present only in the case of a previously assigned version. | +| `previous_builds.id` | ID of the build. | +| `previous_builds.platform` | Platform of the build. For example, `aws` or `azure`. | +| `previous_builds.component_type` | Builder or post-processor used on the build. For example, `amazon-ebs.ubuntu`. | +| `previous_builds.labels` | Labels of the build. Data type: `JSON Object` | +| `previous_builds.artifacts` | The list (array) of artifacts in the build. | +| `previous_builds.artifacts.region` | Region of the artifact. For example, `eu-west-1`. | +| `previous_builds.artifacts.external_identifier` | External identifier of the artifact. For example, `ami-13245456`. | +| `channel.id` | ID of the Channel.
| +| `channel.name` | The user-readable name of the channel. | +| `channel.author_id` | ID of the actor who created the channel. | +| `channel.managed` | Indicates whether the channel is managed by HCP Packer. HCP Packer-managed channels are also identified as the `latest` channel. Data type: `bool` | +| `channel.restricted` | Indicates whether the channel is restricted. Data type: `bool` | + +### Example + +```json +{ + "action":"update", + "actor":{ + "principal_id":"6f212631-5bcc-48a2-9082-37d752904032", + "type":"TYPE_USER", + "user":{ + "email":"test.user@hashicorp.com", + "id":"6f212631-5bcc-48a2-9082-37d752904032", + "name":"test.user@hashicorp.com" + } + }, + "bucket":{ + "id":"01GTCW6AAS494Z8NYJATA5AM5Z", + "name":"test-channel-history" + }, + "channel":{ + "author_id":"test.user@hashicorp.com", + "id":"01H3FM869DP6WTFF826VTKGZCM", + "managed":false, + "restricted":false, + "name":"fgtj" + }, + "description":"Assigned version to channel", + "version":{ + "fingerprint":"test-fingerprint-0", + "id":"01GTCW6QPQ01BEDZZJ6W66YWG8", + "name":"v1" + }, + "builds":[ + { + "platform":"aws", + "component_type":"amazon-ebs.ubuntu", + "id":"01HP1XWZ1EADV8VVKV6J4VHM6S", + "artifacts":[ + { + "external_identifier":"ami-f3", + "region":"us-west-2" + } + ], + "labels":{ + "os":"ubuntu" + } + } + ], + "organization_id":"77f447d4-def0-46f2-bf09-6850d36745ed", + "previous_version":{ + "fingerprint":"test-fingerprint-1", + "id":"01GTCWC4GD3THGE8A029Y5H5XK", + "name":"v2" + }, + "previous_builds":[ + { + "platform":"aws", + "component_type":"amazon-ebs.ubuntu", + "id":"01H5APPBYYF4D0NMVZCRKR85E7", + "artifacts":[ + { + "external_identifier":"ami-f2", + "region":"us-west-2" + } + ], + "labels":{ + "os":"ubuntu" + } + } + ], + "project_id":"a98c3c31-5760-4db1-b62b-0988080a66ad", + "registry":{ + "id":"01GNZQS84K3PTGVVB2YY9R81BC" + }, + "status":"OK", + "timestamp":"2023-07-14T15:48:36Z" +} +``` diff --git a/content/hcp-docs/content/docs/packer/reference/build-pipeline-metadata.mdx
b/content/hcp-docs/content/docs/packer/reference/build-pipeline-metadata.mdx new file mode 100644 index 0000000000..c793c27e8e --- /dev/null +++ b/content/hcp-docs/content/docs/packer/reference/build-pipeline-metadata.mdx @@ -0,0 +1,123 @@ +--- +page_title: Build pipeline metadata reference +description: Learn about the rich metadata associated with your build pipeline that HCP Packer tracks. Tracking build pipeline metadata enables HCP Packer to enhance the security and provenance of your build artifacts. +--- + +# Build pipeline metadata reference + +This topic provides reference information about the rich metadata that HCP Packer collects from build pipelines. Refer to [Rich CI/CD pipeline metadata](/hcp/docs/packer/store#rich-ci-cd-pipeline-metadata) for additional information. + +## Overview + +The following table provides an overview of the components that HCP Packer collects build pipeline metadata from: + +| Component | Details captured | +| --- | --- | +| CI/CD | Pipeline ID, job names, and runner details | +| Git | Remote reference name, commit hash, commit author, and a flag for uncommitted changes | +| Operating system | OS name, architecture, and version | +| Packer and Packer plugin versions | Packer version, plugin name, and plugin version used in the build | +| Packer build commands | User-executed Packer build command options: `debug`, `except`, `force`, `only`, `var`, and `var-file`. HCP Packer does not track sensitive variables specified in the `var` option. | + +## Metadata details + +The following table describes the build pipeline metadata attributes that HCP Packer captures and tracks. Packer stores build pipeline metadata as JSON. Refer to the [example JSON](#example-json) for a rendered view of the data: + +| Attribute | Description | Type | +| --- | --- | --- | +| `packer` | Object containing details about the Packer binary used to run the build. | Object | +| `packer.version` | Packer version in semantic version format. | String | +| `packer.plugins` | List of objects that contain the `name` and `version` of each plugin used in the build. | List | +| `packer.plugins.name` | Name of the plugin. | String | +| `packer.plugins.version` | Plugin version in semantic version format. | String | +| `packer.os` | Object containing details about the operating system that the Packer binary ran on to build the artifact. | Object | +| `packer.os.details` | Object containing the operating system architecture and version information. | Object | +| `packer.os.details.arch` | OS architecture. | String | +| `packer.os.details.version` | OS version as reported by the system. | String | +| `packer.os.type` | Type of operating system. | String | +| `packer.options` | Object indicating the Packer binary options used to run the build. | Object | +| `packer.options.force` | `true` when Packer runs with the `force` option specified. | Boolean | +| `packer.options.debug` | `true` when Packer runs with the `debug` option specified. | Boolean | +| `packer.options.except` | List of builds and post-processors specified when Packer runs with the `except` option specified. | List | +| `packer.options.only` | List of only the builds and post-processors specified when Packer runs with the `only` option specified. | List | +| `packer.options.vars` | List of variables specified when Packer runs with the `vars` option specified.
| List | +| `packer.options.var-files` | List of variable files specified when Packer runs with the `var-files` option specified. | List | +| `packer.options.path` | Path to the Packer binary. | String | +| `cicd` | Object containing details about the CI/CD pipeline. | Object | +| `cicd.details` | Object containing attributes provided by your CI/CD platform. HCP Packer supports GitHub Actions and GitLab CI/CD. Refer to the documentation for your platform for additional information. | Object | +| `cicd.details.` | Attributes provided by your CI/CD platform. | String | +| `cicd.type` | Indicates the type of CI/CD pipeline platform. | String | +| `vcs` | Object containing information about the version control system. | Object | +| `vcs.details` | Object containing details retrieved from the VCS. | Object | +| `vcs.details.author` | Name of the person identified as the artifact configuration author. | String | +| `vcs.details.commit` | Commit ID associated with the build. | String | +| `vcs.details.has_uncommitted_changes` | `true` when the artifact was built with uncommitted changes. | Boolean | +| `vcs.details.ref` | Remote reference, such as tag or branch name, associated with the artifact build in the VCS.
| String | +| `vcs.details.type` | Type of VCS | String | + + +## Example JSON + +The following example contains rich metadata associated with GitHub actions build pipeline: + +```json +{ + "packer": { + "version": "1.11.2", + "plugins": [ + { + "name": "docker", + "version": "1.0.10" + } + ], + "os": { + "details": { + "arch": "amd64", + "version": "6.5.0-1024-azure" + }, + "type": "linux" + }, + "options": { + "force": true, + "debug": false, + "except": [], + "only": [ + "aws.docker" + ], + "vars": [ + "client_id", + "client_secret" + ], + "var-files": [ + "variables.pkrvars.hcl" + ], + "path": "app.pkr.hcl" + } + }, + "cicd": { + "details": { + "GITHUB_ACTOR": "octo", + "GITHUB_ACTOR_ID": "24626766", + "GITHUB_EVENT_NAME": "push", + "GITHUB_JOB": "metadata-phase-2-poc-linux", + "GITHUB_REF": "refs/heads/poc_gha_env", + "GITHUB_REPOSITORY": "hashicorp/cloud-packer-service", + "GITHUB_REPOSITORY_ID": "362182133", + "GITHUB_SHA": "eba36b80139ba1f42bc3f22760257ba1377ee6fe", + "GITHUB_TRIGGERING_ACTOR": "octo-1", + "GITHUB_WORKFLOW_URL": "https://github.com/hashicorp/cloud-packer-service/actions/runs/10058116707" + }, + "type": "github-actions" + }, + "vcs": { + "details": { + "author": "Octo ", + "commit": "eba36b80139ba1f42bc3f22760257ba1377ee6fe", + "has_uncommitted_changes": false, + "ref": "poc_gha_env" + }, + "type": "git" + } +} +``` + diff --git a/content/hcp-docs/content/docs/packer/reference/hcl2-json.mdx b/content/hcp-docs/content/docs/packer/reference/hcl2-json.mdx new file mode 100644 index 0000000000..302c1d53ed --- /dev/null +++ b/content/hcp-docs/content/docs/packer/reference/hcl2-json.mdx @@ -0,0 +1,33 @@ +--- +page_title: JSON and HCL2 feature reference +description: Packer templates written in HCL2 have different features than templates written in JSON. Compare the features in HCL2 Packer templates and JSON Packer templates. 
+--- + +# JSON and HCL2 feature reference + +This topic compares functionality available in JSON and HCL2 Packer template configurations. + +## Introduction + +HCP Packer supports storing artifact metadata when the artifacts are built with a Packer template written in either JSON or HCL2. We recommend writing Packer template configurations in HCL2 because it has greater feature support in HCP Packer. Refer to the following topics for additional information about Packer templates and how HCP Packer stores metadata: + +- [Packer templates](/packer/docs/templates/json_to_hcl) in the Packer documentation +- [Metadata storage overview](/hcp/docs/packer/store) + +## Feature comparison + +The following table shows the HCP Packer features supported by each configuration language. + + +| Feature | HCL2 | JSON | +| ------- | ---- | ---- | +|Basic configuration with environment variables |Full Support|Full Support| +|Custom configuration using the `hcp_packer_registry` block|Full Support|No Support| +|Custom bucket description|Full Support|No Support| +|Custom bucket labels|Full Support|No Support| +|Custom version build labels|Full Support|No Support| +|Ability to use HCP Packer data sources|Full Support|No Support| +|HCP Packer artifact governance|Full Support|No Support| +|HCP Packer artifact ancestry tracking|Full Support|Ancestry is only retrievable using the HCP Packer API. Refer to [Ancestry](/hcp/docs/packer/manage/ancestry) for additional information.| + + diff --git a/content/hcp-docs/content/docs/packer/reference/permissions.mdx b/content/hcp-docs/content/docs/packer/reference/permissions.mdx new file mode 100644 index 0000000000..1d7651cf5e --- /dev/null +++ b/content/hcp-docs/content/docs/packer/reference/permissions.mdx @@ -0,0 +1,68 @@ +--- +page_title: HCP Packer Permissions +description: |- + Permissions table for HCP Packer. +--- + +# HCP Packer permissions + +This topic provides reference information about user permissions for HCP Packer. 
Permissions are role-based access controls (RBAC) inherited from the HCP organization or HCP project. Refer to the [global user permissions reference](/hcp/docs/hcp/iam/users#user-permissions) for additional information about HCP RBAC.

## Introduction

HCP users have different levels of permissions to perform actions in HCP Packer depending on their assigned roles. Users inherit permissions based on their roles at either the [organization](/hcp/docs/hcp/admin/orgs), [project](/hcp/docs/hcp/admin/projects), or HCP Packer bucket level.

### Resolution for multiple roles

When a user account is assigned multiple roles, the permission set from each role is additive. For example, if `userA` has the HCP project `contributor` role, and is then given the `viewer` role in HCP Packer `bucketA`, the effective permission for `userA` in `bucketA` is `contributor`.

In a different scenario, if `userB` has the HCP project `viewer` role, and is then given the `contributor` role in HCP Packer `bucketA`, the effective permission for `userB` in `bucketA` is `contributor`.

The effective HCP Packer permissions for the users from both example scenarios are:

- `userA` has `contributor` [registry permissions](#registry-permissions) at the project level, and `contributor` [bucket permissions](#bucket-permissions) at the `bucketA` level.
- `userB` has `viewer` [registry permissions](#registry-permissions) at the project level, and `contributor` [bucket permissions](#bucket-permissions) at the `bucketA` level.

## Registry permissions

The following table describes HCP Packer registry permissions inherited based on a user's role at either the [organization](/hcp/docs/hcp/admin/orgs) or [project](/hcp/docs/hcp/admin/projects) level.

| HCP Packer registry permissions | No role | Viewer | Contributor | Admin |
| ------------------------------- | :------: | :------: | :---------: | :-----: |
| Create and manage registry | ❌ | ❌ | ✅ | ✅ |
| Create and manage buckets | ❌ | ❌ | ✅ | ✅ |
| Create and manage channels | ❌ | ❌ | ✅ | ✅ |
| Push metadata to HCP Packer | ❌ | ❌ | ✅ | ✅ |
| Revoke and restore artifacts | ❌ | ❌ | ✅ | ✅ |
| Enable audit log streaming | ❌ | ❌ | ✅ | ✅ |
| View HCP Packer resources | ❌ | ✅ | ✅ | ✅ |
| Manage bucket user permissions | ❌ | ❌ | ❌ | ✅ |

## Bucket permissions

The following table describes HCP Packer bucket permissions inherited based on a user's role at the bucket level.

| HCP Packer bucket permissions | No role | Viewer | Contributor | Admin |
| ----------------------------- | :------: | :------: | :---------: | :-----: |
| Push metadata to the bucket | ❌ | ❌ | ✅ | ✅ |
| Create and manage channels | ❌ | ❌ | ✅ | ✅ |
| Revoke and restore artifacts | ❌ | ❌ | ✅ | ✅ |
| View bucket | ❌ | ✅ | ✅ | ✅ |
| View restricted channels | ❌ | ❌ | ✅ | ✅ |

Refer to [Update a bucket's user permissions](/hcp/docs/packer/store/create-bucket#update-a-bucket-s-user-permissions) for instructions about setting user permissions for buckets.

## Assign roles to users

Refer to the [users](/hcp/docs/hcp/admin/users) page to learn how to invite users and assign roles.

The [service principals](/hcp/docs/hcp/iam/service-principal) page describes how to create a service principal.

diff --git a/content/hcp-docs/content/docs/packer/reference/run-task.mdx b/content/hcp-docs/content/docs/packer/reference/run-task.mdx
new file mode 100644
index 0000000000..0949c04c9a
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/reference/run-task.mdx
@@ -0,0 +1,67 @@
---
page_title: Supported resource types for the HCP Terraform run task
description: The HCP Terraform run task validates that hard-coded artifact versions have not been revoked.
Learn about the AWS, Azure, and GCP resource types that the HCP Terraform run task for HCP Packer supports.
---

# Supported resource types for the HCP Terraform run task

This topic provides reference information about resource types that the HCP Terraform run task for HCP Packer supports when used to validate hard-coded machine artifacts. Refer to [Validate artifact builds](/hcp/docs/packer/store/validate-version) for information about using the run task.

## Run task blocking behavior

The HCP Packer run task determines whether it should fail and block a run, or pass with a warning, depending on how the Terraform plan handles the resource you are updating.

If a Terraform plan creates or recreates a resource with a hardcoded reference to a revoked Packer artifact, the run task fails and blocks the run.

If a Terraform plan performs an in-place update on a resource with a hardcoded reference to a revoked Packer artifact, the run task passes with a warning but does not block the run. The run task passes even if your plan changes the artifact reference.

This behavior prevents unrelated changes to your Terraform configuration, such as updating tags on a compute resource, from blocking your run.

The following are examples of resources that Terraform updates in place even when you replace the artifact they reference:

- `aws_launch_template`
- `azurerm_linux_virtual_machine_scale_set`
- `azurerm_windows_virtual_machine_scale_set`
- `azurerm_virtual_machine_scale_set`

To prevent Terraform operations from using a revoked artifact even when performing an in-place update, we recommend that you do the following:

- Avoid hardcoding HCP Packer artifact references in your resources.
Instead, use the [`hcp_packer_version`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/data-sources/packer_version) and [`hcp_packer_artifact`](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/data-sources/packer_artifact) data sources to look up the artifacts in HCP Packer. +- Use the [`replace_triggered_by`](/terraform/language/meta-arguments/lifecycle#replace_triggered_by) Terraform lifecycle rule to force Terraform to recreate the resource when you update the artifact reference. + +## Amazon Web Services (AWS) provider + +The run task supports the following AWS provider resources. Refer to the [AWS provider documentation](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) for details about how to configure its resources: + +- `aws_instance` +- `aws_spot_instance_request` +- `aws_launch_template` +- `aws_launch_configuration` +- `aws_ami_launch_permission` +- `aws_emr_cluster` +- `aws_batch_compute_environment` + +## Azure provider + +The run task supports the following Azure provider resources. Refer to the [Azure provider documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs) for details about how to configure its resources: + +- `azurerm_virtual_machine_scale_set` +- `azurerm_linux_virtual_machine_scale_set` +- `azurerm_windows_virtual_machine_scale_set` +- `azurerm_linux_virtual_machine` +- `azurerm_windows_virtual_machine` +- `azurerm_managed_disk` + +## Google Cloud Platform (GCP) provider + +The run task supports the following GCP provider resources. 
Refer to the [GCP provider documentation](https://registry.terraform.io/providers/hashicorp/google/latest/docs) for details about how to configure its resources:

- `google_compute_instance`
- `google_compute_machine_image_iam_binding`
- `google_compute_machine_image_iam_member`
- `google_compute_machine_image_iam_policy`
- `google_compute_image_iam_binding`
- `google_compute_image_iam_member`
- `google_compute_image_iam_policy`
- `google_compute_disk`

diff --git a/content/hcp-docs/content/docs/packer/reference/webhook.mdx b/content/hcp-docs/content/docs/packer/reference/webhook.mdx
new file mode 100644
index 0000000000..19e0b7f450
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/reference/webhook.mdx
@@ -0,0 +1,318 @@
---
page_title: Webhook events reference
description: |-
  HCP Packer webhook events notify external systems about project resource lifecycle events. Learn about metadata and descriptions you can send in webhook events.
---

# Webhook events

This topic contains reference information about events that you can send in webhook payloads. HCP webhook payloads contain a [set of fields](/hcp/docs/hcp/admin/projects/webhooks#verification-payload) that HCP Packer fills out with important information about a registry lifecycle event. Refer to [Create and manage webhooks](/hcp/docs/hcp/admin/projects/webhooks) for additional information.

For every event, HCP Packer provides the following information:

- **Resource ID:** The ID of your HCP Packer registry.
- **Resource name:** The resource name of your HCP Packer registry.
- **Event ID:** The unique identifier for the event generated by the service, with the format `<service>.event:<ID>`. For example, `packer.event:t79BRg8WhTmDPBRM`.
- **Event action:** The type of action of this event. For example, `create`.
- **Event description:** The event description. For example, `Created version`.
- **Event source:** The source of the event. For example, `hashicorp.packer.version`.
- **Event version:** The version of the event payload that is being sent.
- **Event payload:** The payload with the information about the event. Refer to the specific event payload documentation.

## Version events

HCP Packer sends webhook payloads for the following version events. The event description links to its payload reference.

| Action | Description |
| ------------------- | ------------------------------------------------------------- |
| create | [Created version](#created-version) |
| complete | [Completed version](#completed-version) |
| revoke | [Revoked version](#revoked-version) |
| restore | [Restored version](#restored-version) |
| delete | [Deleted version](#deleted-version) |
| schedule-revocation | [Scheduled version revocation](#scheduled-version-revocation) |
| cancel-revocation | [Cancelled version revocation](#cancelled-version-revocation) |
| assign | [Assigned version to channel](#assigned-version-to-channel) |

### Common version events fields

Every version lifecycle event payload contains the following fields.

| Field | Type | Description |
| -------------------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------- |
| `organization_id` | string | The HCP organization ID. |
| `project_id` | string | The HCP Packer project ID. |
| `registry.id` | string | The ID of the HCP Packer registry. |
| `bucket.id` | string | The ID of the bucket. |
| `bucket.name` | string | User-given name of the bucket. |
| `version.id` | string | ID of the version. |
| `version.fingerprint` | string | User-given version identifier. |
| `version.name` | string | Human-readable name of the version incrementally set when all builds are successful.
|
| `version.revoke_at` | timestamp | Date and time the version was revoked or is scheduled to be revoked. |
| `version.revocation_message` | string | Message provided by the user when revoking the version or scheduling the version to be revoked. |
| `version.revocation_author` | string | The actor who revoked the version or scheduled the version to be revoked. |
| `version.status` | string | Current state of the version. Possible values: `RUNNING`, `CANCELLED`, `REVOKED`, `REVOCATION_SCHEDULED`, `ACTIVE` |
| `actor.principal_id` | string | The ID of the actor. |
| `actor.type` | string | The type of actor. This field returns "TYPE_UNSET", "TYPE_USER", "TYPE_SERVICE", "TYPE_INTERNAL_OPERATOR", or "TYPE_ANONYMOUS". |
| `actor.user.email` | string | This field is present if the `actor` is "TYPE_USER". |
| `actor.user.name` | string | This field is present if the `actor` is "TYPE_USER". |
| `actor.user.id` | string | This field is present if the `actor` is "TYPE_USER". |
| `actor.service.id` | string | This field is present if the `actor` is "TYPE_SERVICE". |
| `actor.service.name` | string | This field is present if the `actor` is "TYPE_SERVICE". |
| `actor.service.user_managed` | bool | This field is present if the `actor` is "TYPE_SERVICE". |
| `actor.internal_operator.id` | string | This field is present if the `actor` is "TYPE_INTERNAL_OPERATOR". |
| `builds` | array | List of builds built in the version. |
| `builds.id` | string | ID of the build. |
| `builds.platform` | string | Platform of the build. For example, `aws` or `azure`. |
| `builds.component_type` | string | Builder or post-processor used on the build. For example, `amazon-ebs.ubuntu`. |
| `builds.labels` | json | Labels of the build. |
| `builds.artifacts` | array | Artifacts built by the build. |
| `builds.artifacts.region` | string | Region of the artifact. For example, `eu-west-1`. |
| `builds.artifacts.external_identifier` | string | External identifier of the artifact.
For example, `ami-13245456`. |

Example payload of a `Completed version` event with the fields common to all events.

```json
{
  "resource_id": "01HAVMCV8XWW945TNKT2KPYSN1",
  "resource_name": "packer/project/ff99bac7-eaec-40a1-8f55-5eb05e789401/registry/01HAVMCV8XWW945TNKT2KPYSN1",
  "event_id": "packer.event:MtCpPwmkdPpD8qqfMRhJ",
  "event_action": "complete",
  "event_description": "Completed version",
  "event_source": "hashicorp.packer.version",
  "event_version": "1",
  "event_payload": {
    "actor": {
      "principal_id": "ac7295a2-85ef-4594-b4c6-3a1f8b733f1a",
      "type": "TYPE_USER",
      "user": {
        "email": "user@email.com",
        "id": "d8f45791-460d-434e-8a40-f627e752276a",
        "name": "User Name"
      }
    },
    "bucket": {
      "id": "01HAVMDEAXNF5RYDDSK5R39HDP",
      "name": "test"
    },
    "version": {
      "fingerprint": "01HAVMD1YBM4PA1KHNYFAYJREM",
      "id": "01HAVMD63G58XDA8JKS2B8J871",
      "revocation_author": "",
      "revocation_message": "",
      "revoke_at": "",
      "status": "ACTIVE",
      "name": "v1"
    },
    "builds": [
      {
        "platform": "aws",
        "component_type": "amazon-ebs.ubuntu",
        "id": "01H5APPBYYF4D0NMVZCRKR85E7",
        "artifacts": [
          {
            "external_identifier": "ami-f2",
            "region": "us-west-2"
          }
        ],
        "labels": {
          "os": "ubuntu"
        }
      }
    ],
    "organization_id": "6a171c1d-c7cd-4047-ba1a-92d686dde2ed",
    "project_id": "ff99bac7-eaec-40a1-8f55-5eb05e789401",
    "registry": {
      "id": "01HAVMCV8XWW945TNKT2KPYSN1"
    }
  }
}
```

### Created version

| Action | Source | Description |
| ------ | ------------------------ | --------------- |
| create | hashicorp.packer.version | Created version |

HCP Packer delivers a webhook payload when a version is created.

The webhook event payload for `Created version` contains the [common fields](#common-version-events-fields) with the exception of `builds`. Version builds are created after the version is created.
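For illustration, a `Created version` payload might resemble the following sketch, which mirrors the `Completed version` example above but omits `builds` and reports a `RUNNING` status. This is a hypothetical, abbreviated payload showing only a subset of the common fields; the IDs and status value are placeholders, not output captured from a live webhook:

```json
{
  "event_action": "create",
  "event_description": "Created version",
  "event_source": "hashicorp.packer.version",
  "event_version": "1",
  "event_payload": {
    "bucket": {
      "id": "01HAVMDEAXNF5RYDDSK5R39HDP",
      "name": "test"
    },
    "version": {
      "fingerprint": "01HAVMD1YBM4PA1KHNYFAYJREM",
      "id": "01HAVMD63G58XDA8JKS2B8J871",
      "status": "RUNNING"
    }
  }
}
```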
### Completed version

| Action | Source | Description |
| -------- | ------------------------ | ----------------- |
| complete | hashicorp.packer.version | Completed version |

HCP Packer delivers a webhook payload with the [common fields](#common-version-events-fields) when a version is completed.

### Revoked version

| Action | Source | Description |
| ------ | ------------------------ | --------------- |
| revoke | hashicorp.packer.version | Revoked version |

HCP Packer delivers a webhook payload when a version is revoked.

The following field is available in the webhook event payload in addition to the [common fields](#common-version-events-fields).

| Field | Type | Description |
| ----------------------------- | ---- | ------------------------------------------------------------------------------------------ |
| `skip_descendants_revocation` | bool | This field is `true` when the revocation request skips revoking the version's descendants. |

### Restored version

| Action | Source | Description |
| ------- | ------------------------ | ---------------- |
| restore | hashicorp.packer.version | Restored version |

HCP Packer delivers a webhook payload with the [common fields](#common-version-events-fields) when a revoked version is restored.

### Deleted version

| Action | Source | Description |
| ------ | ------------------------ | --------------- |
| delete | hashicorp.packer.version | Deleted version |

HCP Packer delivers a webhook payload with the [common fields](#common-version-events-fields) when a revoked version is deleted.

### Scheduled version revocation

| Action | Source | Description |
| ------------------- | ------------------------ | ---------------------------- |
| schedule-revocation | hashicorp.packer.version | Scheduled version revocation |

HCP Packer delivers a webhook payload when a version is scheduled to be revoked.
The following field is available in the webhook event payload in addition to the [common fields](#common-version-events-fields).

| Field | Type | Description |
| ----------------------------- | ---- | ---------------------------------------------------------------------------------------------------------------------- |
| `skip_descendants_revocation` | bool | This field is `true` when the request for scheduling revocation skips scheduling revocation for the version's descendants. |

### Cancelled version revocation

| Action | Source | Description |
| ----------------- | ------------------------ | ---------------------------- |
| cancel-revocation | hashicorp.packer.version | Cancelled version revocation |

HCP Packer delivers a webhook payload with the [common fields](#common-version-events-fields) when a version's scheduled revocation is cancelled.

### Assigned version to channel

| Action | Source | Description |
| ------ | ------------------------ | --------------------------- |
| assign | hashicorp.packer.version | Assigned version to channel |

HCP Packer delivers a webhook payload when a version is either assigned to or unassigned from a channel.

The following fields are available in the webhook event payload in addition to the [common fields](#common-version-events-fields).

| Field | Type | Description |
| ----------------------------------------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------- |
| `previous_version` | object | The version previously assigned to the channel. Present only if a version was previously assigned to the channel. |
| `previous_version.id` | string | ID of the version. |
| `previous_version.fingerprint` | string | User-given version identifier. |
| `previous_version.name` | string | Human-readable name of the version incrementally set when all builds are successful.
|
| `previous_builds` | array | List of builds built in the version previously assigned to the channel. Present only in the case of a previously assigned version. |
| `previous_builds.id` | string | ID of the build. |
| `previous_builds.platform` | string | Platform of the build. For example, `aws` or `azure`. |
| `previous_builds.component_type` | string | Builder or post-processor used on the build. For example, `amazon-ebs.ubuntu`. |
| `previous_builds.labels` | json | Labels of the build. |
| `previous_builds.artifacts` | array | Artifacts built by the build. |
| `previous_builds.artifacts.region` | string | Region of the artifact. For example, `eu-west-1`. |
| `previous_builds.artifacts.external_identifier` | string | External identifier of the artifact. For example, `ami-13245456`. |
| `channel.id` | string | ID of the channel. |
| `channel.name` | string | The user-readable name of the channel. |
| `channel.author_id` | string | ID of the actor who created the channel. |
| `channel.managed` | bool | Indicates whether the channel is managed by HCP Packer. HCP Packer-managed channels are also identified as the `latest` channel. |
| `channel.restricted` | bool | Indicates whether the channel is restricted. |

Example payload of an `Assigned version to channel` event.
+ +```json +{ + "resource_id": "01HAVMCV8XWW945TNKT2KPYSN1", + "resource_name": "packer/project/ff99bac7-eaec-40a1-8f55-5eb05e789401/registry/01HAVMCV8XWW945TNKT2KPYSN1", + "event_id": "packer.event:fHcFpzWPwWMtmbpGBtLC", + "event_action": "assign", + "event_description": "Assigned version to channel", + "event_source": "hashicorp.packer.version", + "event_version": "1", + "event_payload": { + "actor": { + "principal_id": "ac7295a2-85ef-4594-b4c6-3a1f8b733f1a", + "type": "TYPE_USER", + "user": { + "email": "user@email.com", + "id": "d8f45791-460d-434e-8a40-f627e752276a", + "name": "User Name" + } + }, + "bucket": { + "id": "01HAVMDEAXNF5RYDDSK5R39HDP", + "name": "test" + }, + "channel": { + "author_id": "HCP Packer", + "id": "01HAVMHSBWQ3952KWR50YBHZA4", + "managed": true, + "restricted": true, + "name": "latest" + }, + "version": { + "fingerprint": "01HAVMD1YBM4PA1KHNYFAYJREM", + "id": "01HAVMD63G58XDA8JKS2B8J871", + "name": "v2" + }, + "builds":[ + { + "platform":"aws", + "component_type":"amazon-ebs.ubuntu", + "id":"01HP1XWZ1EADV8VVKV6J4VHM6S", + "artifacts":[ + { + "external_identifier":"ami-f3", + "region":"us-west-2" + } + ], + "labels":{ + "os":"ubuntu" + } + } + ], + "organization_id": "6a171c1d-c7cd-4047-ba1a-92d686dde2ed", + "previous_version": { + "fingerprint": "01HAVMJG6R3KFAN7603RDZCGTC", + "id": "01HAVMJKRSPRPQ30JJEW3Q5084", + "name": "v1" + }, + "previous_builds":[ + { + "platform":"aws", + "component_type":"amazon-ebs.ubuntu", + "id":"01H5APPBYYF4D0NMVZCRKR85E7", + "artifacts":[ + { + "external_identifier":"ami-f2", + "region":"us-west-2" + } + ], + "labels":{ + "os":"ubuntu" + } + } + ], + "project_id": "ff99bac7-eaec-40a1-8f55-5eb05e789401", + "registry": { + "id": "01HAVMCV8XWW945TNKT2KPYSN1" + } + } +} +``` + diff --git a/content/hcp-docs/content/docs/packer/store/create-bucket.mdx b/content/hcp-docs/content/docs/packer/store/create-bucket.mdx new file mode 100644 index 0000000000..8984f71a8c --- /dev/null +++ 
b/content/hcp-docs/content/docs/packer/store/create-bucket.mdx
@@ -0,0 +1,64 @@
---
page_title: Create and manage buckets
description: |-
  HCP Packer buckets are repositories for metadata about each artifact built with Packer. Learn how to create and manage artifact metadata buckets in HCP Packer.
---

# Create and manage buckets

This topic describes how to create and manage artifact buckets in the HCP Packer registry. A bucket is a repository that stores information about each artifact that is built with Packer. Refer to [Metadata storage overview](/hcp/docs/packer/store) for additional information about constructs in HCP Packer for storing metadata.

## Introduction

Buckets can contain artifact metadata for machine images or containers from multiple providers. For example, a golden image for Amazon Web Services (AWS) may exist in multiple regions, or you may have an equivalent Azure image containing the same software. If you define these images in the same Packer template, the registry stores their metadata in the same bucket.

## Create a bucket

HCP Packer automatically creates buckets the first time you use the `packer build` command to build a template.

To create a new bucket from an existing Packer template, specify a new value in the `hcp_packer_registry.bucket_name` field in the Packer template configuration. The next time you build the template, HCP Packer creates a new bucket associated with the template. Refer to [Push artifact metadata to HCP Packer](/hcp/docs/packer/store/push-metadata) for additional information.

You can also set the `HCP_PACKER_BUCKET_NAME` environment variable when building the Packer template. The environment variable overrides the bucket configured in the template. HCP Packer creates a new bucket if the value specified with the variable does not already exist.

Note that the environment variable is required if you are building Packer templates written in JSON.
This is because JSON templates do not support the `hcp_packer_registry` configuration block. Refer to [JSON and HCL2 feature reference](/hcp/docs/packer/reference/hcl2-json) for additional information. +## View a bucket + +You can view information associated with a bucket from the HCP Packer UI. + +1. Click **Packer** in the sidebar. The HCP Packer page appears, listing all of the buckets in the project. +1. Click on a bucket ID. The overview page for the bucket appears. This page shows the bucket description, the artifact ID from the latest version, and any custom labels. +1. Click **Versions** in the sidebar to view details about each version. +1. Click **Channels** to view details about associated channels. Refer to [Create and manage channels](/hcp/docs/packer/manage/channel) for additional information. + +## Edit a bucket + +To edit the metadata contained in a bucket, modify the values specified in the `hcp_packer_registry` template block of your Packer template configuration. During the next build, HCP Packer overwrites the old values on the registry. + +Note that you can only modify the bucket’s metadata, not the existing versions contained in the bucket, which are immutable. Existing versions retain their metadata even after updating the template file. + +When you change the bucket’s name, the registry creates a new bucket with the new name for your template and stores all future artifact metadata in the new bucket, but it does not delete the old bucket. 
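As a sketch of the kind of block these edits apply to, the following `hcp_packer_registry` block sets the bucket name, description, and labels that the registry overwrites on the next build. The bucket name, description, label values, and source name are illustrative, not taken from a real configuration:

```hcl
build {
  sources = ["source.amazon-ebs.basic-example-east"]

  hcp_packer_registry {
    # Changing the bucket name makes HCP Packer create a new bucket
    # on the next build; the old bucket is not deleted.
    bucket_name = "golden-ubuntu"

    # Description and labels are overwritten on the registry at each build.
    description = "Ubuntu base image with common tooling"

    bucket_labels = {
      "os" = "ubuntu"
    }

    build_labels = {
      "build-time" = timestamp()
    }
  }
}
```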
## Update a bucket's user permissions

You can update a user's permissions at the bucket level through the [Terraform HashiCorp Cloud Platform (HCP) Provider](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) using the [policy resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/packer_bucket_iam_policy) or [binding resource](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/packer_bucket_iam_binding) to assign a role to the principal of your choice.

You can allow users, service principals, and groups to contribute to a bucket without modifying their project-level viewer permissions to the rest of the HCP Packer registry. Refer to the [HCP Packer permissions reference](/hcp/docs/packer/reference/permissions) for more information about user permissions for HCP Packer.

Service principals with contributor access at the bucket level but viewer access at the registry level require Packer v1.11.1 or later to push metadata to HCP Packer.

## Delete a bucket

You can permanently delete a bucket from the HCP Packer UI:

1. Go to the HCP Packer homepage and open the ellipses menu for the bucket you want to delete.
1. Click **Delete bucket** when prompted.

The bucket and all of its data are permanently removed from the HCP Packer registry.

diff --git a/content/hcp-docs/content/docs/packer/store/index.mdx b/content/hcp-docs/content/docs/packer/store/index.mdx
new file mode 100644
index 0000000000..c12196ee53
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/store/index.mdx
@@ -0,0 +1,58 @@
---
page_title: HCP Packer metadata storage overview
description: |-
  Store metadata about the artifacts you build in HCP Packer so that you can track updates and help platform teams use up-to-date artifacts in their deployments.
---

# Metadata storage overview

This topic provides an overview of how HCP Packer stores metadata about the artifacts you build with Packer.
## Workflow

HCP Packer stores metadata about your Packer artifacts so that you can track updates, use the most up-to-date base artifacts, and deploy the most up-to-date downstream artifacts. The following process describes how HCP Packer acquires metadata from Packer builds:

1. Create a bucket in the HCP Packer registry to store artifacts. HCP Packer automatically creates a bucket when you build the artifact if a bucket does not already exist. Refer to [Create and manage buckets](/hcp/docs/packer/store/create-bucket) for additional information.
1. Configure the Packer template or set environment variables that enable you to push artifact metadata to HCP Packer. To push metadata using a Packer template, the template must include the `hcp_packer_registry` build block. Refer to [Push metadata to HCP Packer](/hcp/docs/packer/store/push-metadata) for additional information.
1. Run Packer and specify the template to build the artifact and generate metadata. Packer pushes the metadata to the registry in HCP Packer. Refer to the [Packer documentation](/packer/docs/commands) for instructions on how to run Packer commands.

### Metadata organization hierarchy

Each HCP Packer project in your HCP organization has one Packer _registry_. The registry contains one or more _buckets_. Each bucket contains the _version_ and _build_ information associated with the artifacts built from a Packer template. Additionally, you can assign versions to _channels_, which are human-readable names that consumers can reference in Packer templates and Terraform configurations. Refer to [Metadata concepts](#metadata-concepts) for additional information about each construct.

### Rich CI/CD pipeline metadata

If you build artifacts using Packer v1.11.2 or later, HCP Packer also tracks rich metadata associated with your build pipeline, such as the CI/CD platform, version control system, operating system, and Packer build command options.
HCP Packer tracks rich metadata in order to provide comprehensive metadata for each Packer build. This metadata enhances the security and provenance of artifacts, ensuring detailed information about the build environment and process is recorded and accessible.

It also provides traceability and helps you achieve compliance with Supply-chain Levels for Software Artifacts (SLSA) L1 standards. Refer to the [SLSA documentation](https://slsa.dev/spec/v1.0/levels) for additional information.

Refer to the [Build pipeline metadata reference](/hcp/docs/packer/reference/build-pipeline-metadata) for details about the metadata HCP Packer tracks.

## Metadata concepts

The metadata stored in HCP Packer is organized around the following concepts.

### Buckets

HCP Packer stores artifact metadata for each version of the artifact in a _bucket_. Each HCP Packer registry has one or more buckets that map to a Packer template. Refer to [Create and manage buckets](/hcp/docs/packer/store/create-bucket) for additional information.

### Versions

Every time you build a Packer template, the registry creates a new artifact _version_ in the associated bucket. A version is an immutable record generated by the `packer build` command that contains the metadata for all of the builds in the template.

Versions let you track revisions and revocations of artifacts over time. Each complete version has at least one build, but a version may have many builds depending on how you configured sources in your template.

### Builds

Each version has at least one _build_ that contains the metadata from all artifacts produced by a single builder. By default, HCP Packer stores an artifact ID and a creation date, but individual builders may also produce additional information for the artifact. The registry adds this information as auto-generated labels to each completed build. Refer to [Builders](/packer/docs/builders) in the Packer documentation for additional information.
### Ancestry

_Ancestry_ refers to the relationship between source artifacts, or parents, and the child artifacts created from the source. Depending on whether you are using registry channels and whether HCP Packer is configured to track parent artifacts, HCP Packer creates an ancestry relationship between a new child artifact version and its source artifact when Packer pushes artifact metadata to the registry. Refer to [View ancestry](/hcp/docs/packer/manage/ancestry) for additional information.

### Channels

Channels are names that you can assign to versions. People who consume your artifacts can include the channel name in their Packer templates or Terraform configurations so that they use the correct version without modifying their code. Refer to [Create and manage channels](/hcp/docs/packer/manage/channel) for additional information.

diff --git a/content/hcp-docs/content/docs/packer/store/push-metadata.mdx b/content/hcp-docs/content/docs/packer/store/push-metadata.mdx
new file mode 100644
index 0000000000..169f26a2a2
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/store/push-metadata.mdx
@@ -0,0 +1,169 @@
---
page_title: Push artifact metadata to HCP Packer
description: Learn how to push artifact metadata into buckets in the HCP Packer registry using either environment variables or by configuring the Packer template.
---

# Push artifact metadata to HCP Packer

This topic describes how to push metadata to HCP Packer when building artifacts with Packer. For information about metadata storage in HCP Packer, refer to [Metadata storage overview](/hcp/docs/packer/store). For information about Packer templates, refer to [Packer Templates](/packer/docs/templates) in the Packer documentation.

## Overview

There are two methods for enabling Packer to push artifact metadata to HCP Packer when you run the `packer build` command.
You can either set environment variables on the local machine or define the push behavior in the Packer template configuration.
+
+### Environment variable
+
+You can use environment variables with both JSON and HCL2 templates. This lets you push artifact metadata to the HCP Packer registry without making any changes to your Packer template. This is the only way to push metadata to the HCP Packer registry from a JSON template. Refer to [Environment variable method](#environment-variable-method) for details.
+
+For additional information about JSON and HCL2 templates, refer to the [JSON and HCL2 Feature Comparison](/hcp/docs/packer/reference/hcl2-json).
+
+### Packer template configuration
+
+We recommend configuring the `hcp_packer_registry` block in your Packer template configuration instead of using environment variables. The `hcp_packer_registry` block lets you customize the artifact metadata and add labels that communicate important details about the artifact. Refer to [Packer template configuration method](#packer-template-configuration-method) for details.
+
+### Version fingerprint
+
+When you build an artifact, Packer automatically generates a _version fingerprint_ and associates it with the artifact. The version fingerprint is a unique identifier that determines if metadata for a template build on the registry is complete. The build fails when the fingerprint matches an existing, complete version. Refer to [Version Fingerprinting](/packer/docs/hcp#iteration-fingerprinting) for additional information.
+
+## Requirements
+
+You must install Packer v1.7.6 or later on the local machine.
+
+### Plugins
+
+Packer relies on plugins to perform many tasks. Refer to [Packer integrations](/packer/integrations?flags=hcp-ready) for a list of HCP-ready plugins.
+
+### HCP authentication
+
+Packer uses environment variables to authenticate with the HCP Packer registry and push artifact metadata to a particular organization and project.
Set the following environment variables to the credentials of a service principal key:
+
+- `HCP_CLIENT_ID`
+- `HCP_CLIENT_SECRET`
+
+Refer to [Service Principals](/hcp/docs/hcp/iam/service-principal) for additional information.
+
+## Environment variable method
+
+1. Set the following environment variables on the local machine to enable Packer to push metadata to the HCP Packer registry without modifying the Packer template:
+
+   - Set the `HCP_PACKER_BUCKET_NAME` environment variable to the name of the bucket where you want the registry to store metadata for this Packer build.
+   - If you are using Packer v1.7.6 through v1.8.3, you must also set the `HCP_PACKER_REGISTRY` environment variable to `ON`.
+
+   If the bucket does not already exist, HCP Packer creates it. HCP Packer then creates a new version inside the bucket.
+
+1. Run the `packer build` command to build the artifact and push the metadata to the registry.
+
+### Environment variables example
+
+The following environment variable specifies that Packer should push artifact metadata to the `example-amazon-ebs` bucket in the HCP Packer registry:
+
+
+
+```hcl
+packer {
+  required_plugins {
+    amazon = {
+      version = ">= 1.0.1"
+      source  = "github.com/hashicorp/amazon"
+    }
+  }
+}
+
+source "amazon-ebs" "basic-example-east" {
+  region = "us-east-2"
+  source_ami_filter {
+    filters = {
+      virtualization-type = "hvm"
+      name                = "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*"
+      root-device-type    = "ebs"
+    }
+    owners      = ["099720109477"]
+    most_recent = true
+  }
+  instance_type  = "t2.small"
+  ssh_username   = "ubuntu"
+  ssh_agent_auth = false
+  ami_name       = "packer_AWS_{{timestamp}}"
+}
+
+build {
+  name    = "example-amazon-ebs"
+  sources = ["source.amazon-ebs.basic-example-east"]
+}
+```
+
+```json
+{
+  "builders": [
+    {
+      "type": "amazon-ebs",
+      "name": "basic-example-east",
+      "ami_name": "packer_AWS_{{timestamp}}",
+      "region": "us-east-2",
+      "source_ami_filter": {
+        "filters": {
+          "virtualization-type": "hvm",
+          "name":
"ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*", + "root-device-type": "ebs" + }, + "owners": [ + "099720109477" + ], + "most_recent": true + }, + "instance_type": "t2.small", + "ssh_username": "ubuntu", + "ssh_agent_auth": false + }] + } +``` + + + + +```shell-session +$ export HCP_PACKER_BUCKET_NAME=example-amazon-ebs +``` + +When Packer builds the following example template, it produces a single Amazon EBS artifact and pushes the metadata to a registry named `example-amazon-ebs` as specified in the `build.name` field. + +HCP Packer creates a new verison in the `example-amazon-ebs` bucket called `v1` to denote that it is the first version of a completed artifact build. The `v1` version contains a unique version fingerprint that you can use to identify the artifact version in the UI and HCP Packer data sources. + +## Packer template configuration method + +Add the `hcp_packer_registry` block to your Packer template and configure custom metadata for Packer to push to the registry when building the artifact. You can only configure the `hcp_packer_registry` block in HCL2 Packer templates. Refer to the [`hcp_packer_registery` configuration reference](/packer/docs/templates/hcl_templates/blocks/build/hcp_packer_registry) in the Packer documentation for details about configuring the block. + +The values in the `hcp_packer_registry` block override the default values Packer derives from the `build` block. For example, the value specified in `hcp_packer_registry.bucket_name` overrides the `build.name` value. + +You can also add a `description` and the following types of custom labels to communicate important details about the artifact: + +- `bucket_labels`: Specifies an arbitrary map of key-value pairs of strings. Use bucket labels to help you identify characteristics common to a set of artifacts, such as identifying which team maintains the Packer template and which operating system the associated artifacts use. 
HCP Packer applies custom bucket labels to an entire bucket.
+- `build_labels`: Specifies an arbitrary map of key-value pairs of strings. Use build labels to provide details about artifact characteristics within a particular version. For example, build labels may identify the precise time of the build or the versions of the tools included in a build, providing an immutable record of these details for future consumers. HCP Packer applies custom build labels to all of the completed builds within a version.
+
+### Template configuration example
+
+The following example Packer template configuration includes metadata that describes the purpose of the build. It also includes bucket and build labels so that consumers know which team is responsible for the bucket, the operating system associated with all builds, associated software versions, and a timestamp for the build.
+
+```hcl
+build {
+  name    = "amazon-golden"
+  sources = ["source.amazon-ebs.basic-example-east"]
+
+  hcp_packer_registry {
+    bucket_name = "example-amazon-ebs-custom"
+    description = "Golden image for Amazon-backed applications"
+
+    bucket_labels = {
+      "team" = "amazon-development"
+      "os"   = "Ubuntu"
+    }
+
+    build_labels = {
+      "python-version" = "3.9"
+      "ubuntu-version" = "Xenial 16.04"
+      "build-time"     = timestamp()
+    }
+  }
+}
+```
+
+HCP Packer creates a new version in the `example-amazon-ebs-custom` bucket that contains all of the metadata specified in the template when someone builds the artifact.
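+
+With the environment variable method, an end-to-end push session might look like the following sketch. The service principal values are placeholders, and the commands assume the template is in the current directory:
+
+```shell-session
+$ export HCP_CLIENT_ID=<service-principal-client-id>
+$ export HCP_CLIENT_SECRET=<service-principal-client-secret>
+$ export HCP_PACKER_BUCKET_NAME=example-amazon-ebs
+$ packer init .
+$ packer build .
+```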
+
 diff --git a/content/hcp-docs/content/docs/packer/store/reference.mdx b/content/hcp-docs/content/docs/packer/store/reference.mdx
new file mode 100644
index 0000000000..200d126403
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/store/reference.mdx
@@ -0,0 +1,133 @@
+---
+page_title: Reference artifact metadata from Packer templates and Terraform configurations
+description: |-
+  Learn how to reference artifact metadata stored in the HCP Packer registry in Packer templates and Terraform configurations.
+---
+
+# Reference artifact metadata from Packer templates and Terraform configurations
+
+This topic describes how to configure Packer templates and Terraform configurations to reference artifact metadata stored in the HCP Packer registry. Refer to [Push metadata to HCP Packer](/hcp/docs/packer/store/push-metadata) for information about storing metadata in HCP Packer.
+
+## Overview
+
+You can configure Packer templates and Terraform configuration files to reference artifact metadata stored in the HCP Packer registry. Add a data source to the template or configuration that specifies an HCP Packer channel associated with the metadata.
+
+Channels are human-readable names that artifact consumers can reference in their Packer templates or Terraform configurations instead of hard-coding artifact versions, so that they automatically use the correct version in their applications. Refer to [Create and manage channels](/hcp/docs/packer/manage/channel) for additional information.
+
+You can manually configure the files or use the HCL generator in the HCP Packer UI to generate HCL that you can copy and paste into your template or configuration.
+
+When you run the template or configuration, Packer or Terraform builds downstream artifacts from the golden artifact that has metadata on the HCP Packer registry. Using these data sources may result in a billable request depending on your pricing plan.
+
+> **Hands On:** Complete the following tutorials to get started:
+ - [Create Child Image from Registry Image](/packer/tutorials/hcp-get-started/hcp-create-child-image)
+ - [Control Image with Channels](/packer/tutorials/hcp-get-started/hcp-image-channels)
+
+## Configure a Packer template
+
+1. Add the following data sources to your Packer template:
+
+   - `hcp-packer-version`: Specifies a bucket name and channel in the HCP Packer registry to retrieve the version metadata. Refer to the [`hcp-packer-version` configuration reference](/packer/docs/datasources/hcp/hcp-packer-version) for details.
+   - `hcp-packer-artifact`: Specifies the bucket name, version ID, and other parameters that enable Packer to retrieve the metadata. Refer to the [`hcp-packer-artifact` configuration reference](/packer/docs/datasources/hcp/hcp-packer-artifact) for details.
+
+1. Pass the metadata in a `source` block so that you can build child artifacts from the base artifact. The `source` block lets you build new artifacts on top of the most recent approved version of an existing artifact. Refer to the [Packer data source documentation](/packer/docs/datasources/hcp) for a full list of arguments and configuration options.
+
+In the following example, the template retrieves the AMI ID for `us-west-2` and uses it as a base artifact for downstream builds.
+
+```hcl hideClipboard
+# Get the artifact ID from the base artifact
+# Retrieve metadata from the production artifact channel
+data "hcp-packer-artifact" "secondary-source" {
+  bucket_name  = "learn-packer-ubuntu"
+  channel_name = "production"
+  platform     = "aws"
+  region       = "us-west-2"
+}
+
+# Set the `source_ami` to the base artifact ID
+source "amazon-ebs" "packer-secondary" {
+  source_ami = data.hcp-packer-artifact.secondary-source.id
+  ...
+}
+```
+
+Refer to [HCL generator](#hcl-generator) for instructions on how to use the HCP Packer UI to generate configuration for your Packer template.
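+
+The two data sources can also be chained so that the artifact lookup is pinned to the version that a channel currently points to. The following is a minimal sketch, not a definitive configuration; it assumes the same bucket and channel names as the previous example and that the version data source exposes a `fingerprint` attribute:
+
+```hcl hideClipboard
+# Resolve the version that the "production" channel currently points to
+data "hcp-packer-version" "ubuntu" {
+  bucket_name  = "learn-packer-ubuntu"
+  channel_name = "production"
+}
+
+# Look up the AWS artifact for that version in us-west-2
+data "hcp-packer-artifact" "ubuntu-west" {
+  bucket_name         = "learn-packer-ubuntu"
+  version_fingerprint = data.hcp-packer-version.ubuntu.fingerprint
+  platform            = "aws"
+  region              = "us-west-2"
+}
+```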
+
+## Configure a Terraform configuration file
+
+You can use data sources from the HCP provider for Terraform to retrieve artifact metadata and reference it in your Terraform configuration. Refer to the [HCP provider documentation](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs) for details.
+
+Add the provider to your Terraform configuration and use the following data sources:
+
+- `hcp_packer_version`: Specifies a bucket name and channel in the HCP Packer registry to retrieve the version metadata. Refer to the [`hcp_packer_version` configuration reference in the Terraform registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/data-sources/packer_version) for details.
+
+- `hcp_packer_artifact`: Specifies the version ID and channel name to retrieve metadata from the HCP Packer registry. Refer to the [`hcp_packer_artifact` configuration reference in the Terraform registry](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/data-sources/packer_artifact) for details.
+
+In the following example, the HCP provider retrieves the AMI ID for `us-west-2` and uses it to provision an EC2 instance. Refer to the Terraform documentation for more information about [data sources](/terraform/docs/language/data-sources) and working with [providers](/terraform/docs/language/providers).
+
+```hcl hideClipboard
+terraform {
+  required_providers {
+    hcp = {
+      source  = "hashicorp/hcp"
+      version = ">= 0.81.0"
+    }
+
+    aws = {
+      source  = "hashicorp/aws"
+      version = "~> 3.52.0"
+    }
+  }
+
+  required_version = ">= 0.14.9"
+}
+
+# Get the artifact ID from the base artifact
+# Retrieve metadata from the production artifact channel
+data "hcp_packer_artifact" "ubuntu-aws-west" {
+  bucket_name  = "learn-packer-ubuntu"
+  channel_name = "production"
+  platform     = "aws"
+  region       = "us-west-2"
+}
+
+provider "aws" {
+  profile = "default"
+  region  = "us-west-2"
+}
+
+# Provision an EC2 instance with the HCP Packer artifact
+resource "aws_instance" "app_server" {
+  ami           = data.hcp_packer_artifact.ubuntu-aws-west.external_identifier
+  instance_type = "t2.micro"
+
+  tags = {
+    Name = "ExampleAppServerInstance"
+  }
+}
+```
+
+Refer to [HCL generator](#hcl-generator) for instructions on how to use the HCP Packer UI to generate code for your Terraform configuration.
+
+## HCL generator
+
+HCP Packer can generate the HCL configuration to retrieve metadata from each bucket in the registry. Specify the channel, platform, and region in the UI and then paste the auto-generated code into your Packer template or Terraform configuration.
+
+1. Click **Packer** to open the HCP Packer registry page.
+
+1. Click a bucket to open its **Overview** page.
+
+1. Choose an option under **Use as a data source**:
+   - **Use with Terraform** to generate code for Terraform data sources
+   - **Use with Packer** to generate configuration for Packer data sources
+
+1. Choose the **artifact channel**, **platform**, and **region** for the artifact you want to reference. HCP Packer autogenerates HCL configuration based on your selections.
+
+1. Click **Copy code** to copy the configuration to your clipboard.
+
+1. Paste the autogenerated HCL code into your Packer template or Terraform configuration.
+
+## Deleted or deactivated registries
+
+Consumers receive an error when referencing metadata from a deactivated or deleted registry. An administrator may manually deactivate or delete the registry, or HCP Packer may automatically deactivate it because of billing issues. Contact [HashiCorp Support](https://support.hashicorp.com/) with questions.
diff --git a/content/hcp-docs/content/docs/packer/store/sbom.mdx b/content/hcp-docs/content/docs/packer/store/sbom.mdx
new file mode 100644
index 0000000000..3a78007d25
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/store/sbom.mdx
@@ -0,0 +1,63 @@
+---
+page_title: Store artifact software bill of materials
+description: |-
+  Learn how to associate SBOM files with artifact versions in the HCP Packer registry.
+---
+
+# Manage artifact software bill of materials
+
+This topic describes how to upload software bill of materials (SBOM) files and associate them with an artifact version in the HCP Packer registry.
+
+## Requirements
+
+A [Standard edition registry](https://cloud.hashicorp.com/products/packer/pricing) is required to upload and download SBOM files. Refer to [Manage registry](/hcp/docs/packer/manage-registry) for details about viewing and changing your registry tier.
+
+## Overview
+
+A software bill of materials records an artifact's package metadata and helps with security and compliance audits. You can upload existing SBOM files to HCP Packer and associate them with an artifact version with the [`hcp-sbom` provisioner](/packer/docs/provisioners/hcp-sbom) or the [HCP Packer registry API](/hcp/api-docs/packer).
+
+## Create a software bill of materials
+
+Packer does not generate SBOM files, so you must use a third-party tool to create them. HCP Packer requires SBOM files to be in either [SPDX](https://spdx.github.io/spdx-spec/latest) or [CycloneDX](https://cyclonedx.org/) format.
For an example Packer template that uses the provisioner, refer to the [Track Packer artifact package bill of materials](/packer/tutorials/hcp/track-artifact-package-metadata) tutorial.
+
+## Upload the software bill of materials
+
+You can upload SBOM files to the HCP Packer registry using either the `hcp-sbom` provisioner or the HCP Packer API.
+
+### Upload using the provisioner
+
+You can use the `hcp-sbom` provisioner in your Packer template to upload an SBOM from your artifact to the HCP Packer registry.
+
+1. Add the `hcp-sbom` provisioner to your Packer template, for example:
+
+   ```hcl
+   provisioner "hcp-sbom" {
+     source = "/tmp/sbom-cyclonedx-0.3.json"
+   }
+   ```
+
+   Refer to the [`hcp-sbom` provisioner reference](/packer/docs/provisioners/hcp-sbom) for more information.
+
+1. Run the `packer init` command to install the provisioner.
+1. Run `packer build` to upload the SBOM file.
+
+### Upload using the API
+
+Refer to the [`UploadSboms` API reference](/hcp/api-docs/packer#PackerService_UploadSbom) for more information on this API endpoint.
+
+## Download artifact software bill of materials
+
+You can view information about an SBOM and download SBOM files from the HCP Packer registry using the UI or the API.
+
+### View packages in the UI
+
+On the artifact version build page, HCP Packer lists all packages, their versions, and the SBOMs they are included in under the **Packages** section. You can filter packages by name with the **Search package names** text box.
+
+### Download from the UI
+
+1. Open the artifact version overview page.
+1. Click the **Download SBOM** drop-down and choose the SBOM you want to download.
+
+### Download using the API
+
+Send a `GET` request to the `/GetSbom` HCP Packer API endpoint to download SBOM files using the [HCP Packer registry API](/hcp/api-docs/packer#PackerService_GetSbom).
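+
+As a sketch, downloading an SBOM with the API looks similar to the following session. The access token variable and the request path are placeholders rather than the literal endpoint; refer to the linked API reference for the exact path and parameters:
+
+```shell-session
+$ curl --silent \
+    --header "Authorization: Bearer $HCP_ACCESS_TOKEN" \
+    "https://api.cloud.hashicorp.com/packer/<GetSbom-path>" \
+    --output sbom.json
+```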
diff --git a/content/hcp-docs/content/docs/packer/store/validate-version.mdx b/content/hcp-docs/content/docs/packer/store/validate-version.mdx
new file mode 100644
index 0000000000..7708139e6b
--- /dev/null
+++ b/content/hcp-docs/content/docs/packer/store/validate-version.mdx
@@ -0,0 +1,105 @@
+---
+page_title: Validate artifact versions referenced in Terraform configurations
+description: Validate artifact versions to avoid referencing revoked versions in your Terraform configuration. Learn how to validate artifact versions with Sentinel or the HCP Terraform run task for HCP Packer.
+---
+
+# Validate artifact versions referenced in Terraform configurations
+
+This topic describes how to validate that the HCP Packer artifacts referenced in your Terraform configuration have not been revoked. Administrators can revoke artifact versions that have become outdated or that pose a security risk. Refer to [Revoke and restore artifacts](/hcp/docs/packer/manage/revoke-restore) for additional information.
+
+## Overview
+
+You can manually validate artifacts using the Sentinel policy-as-code framework or set up the HCP Terraform run task for HCP Packer to automatically validate artifact versions.
+
+- **Manual validation**: To manually validate artifacts, define a Sentinel policy that checks for revoked artifacts.
+- **Automatic validation**: Set up the HCP Terraform run task for HCP Packer to automatically check your Terraform configuration for references to revoked artifacts.
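+
+As an illustration of the manual approach, the following Sentinel policy sketch fails when any planned `hcp_packer_artifact` data source carries a revocation timestamp. The policy is illustrative rather than definitive: it assumes these data sources appear in the plan's resource changes and treats an empty `revoke_at` attribute as "not revoked".
+
+```sentinel
+import "tfplan/v2" as tfplan
+
+# Collect all planned hcp_packer_artifact data sources
+artifacts = filter tfplan.resource_changes as _, rc {
+  rc.mode is "data" and rc.type is "hcp_packer_artifact"
+}
+
+# Pass only when no referenced artifact has a revocation timestamp
+main = rule {
+  all artifacts as _, rc {
+    (rc.change.after.revoke_at else "") is ""
+  }
+}
+```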
+ +> **Hands on**: Complete the following tutorials for guidance on how to set up and test the HCP Terraform run task integration: + - [Identify compromised artifacts with HCP Terraform](/packer/tutorials/hcp/run-tasks-data-source-image-validation) + - [Set Up HCP Terraform Run Task for HCP Packer](/packer/tutorials/hcp/setup-tfc-run-task) + +## Requirements + +- Manual validation requires the following software versions: + - Terraform HCP provider 0.33.0 and later + - Terraform 1.2.0 and later +- You must use a supported resource type for the run task to validate referenced artifacts. Refer to [Supported resource types for the HCP Terraform run task](/hcp/docs/packer/reference/run-task) reference for information about supported types. + +## Manual validation + +When the `hcp_packer_artifact` data source references a revoked artifact or an artifact that is scheduled to be revoked, the `revoke_at` attribute is set to the revocation timestamp. + +You can define a Sentinel policy that checks for the `revoke_at` attribute to validate Terraform configurations for revoked artifacts. Refer to [Defining Sentinel Policies](/terraform/cloud-docs/policy-enforcement/sentinel) in the HCP Terraform documentation for instructions. + +In the following example, a Terraform configuration only provisions an EC2 instance if the data source returns a version that is not revoked. + +```hcl hideClipboard +resource "aws_instance" "app_server" { + ami = data.hcp_packer_artifact.ubuntu_us_east_2.external_identifier + instance_type = "t2.micro" + tags = { + Name = "Learn-HCP-Packer" + } + + lifecycle { + precondition { + condition = try( + formatdate("YYYYMMDDhhmmss", data.hcp_packer_artifact.ubuntu_us_east_2.revoke_at) > formatdate("YYYYMMDDhhmmss", timestamp()), + data.hcp_packer_artifact.ubuntu_us_east_2.revoke_at == "" + ) + error_message = "Source AMI is revoked." 
+    }
+  }
+}
+```
+
+## Automatic validation
+
+The HCP Terraform run task for HCP Packer directs HCP Packer to check for references to revoked artifacts in your Terraform configuration during Terraform operations. The run task fails if it detects resources that reference revoked artifacts.
+
+When a run task fails, HCP Packer stops the Terraform run if the run task's enforcement mode is set to `Mandatory`. The run proceeds with a warning if the mode is set to `Advisory`. Terraform also prints information about the run task operation to the console. The amount of detail depends on your HCP Packer tier.
+
+HCP Terraform Free Edition includes one run task that you can associate with up to ten workspaces. Refer to [Packer pricing](https://www.hashicorp.com/products/packer/pricing) for details.
+
+> **Hands on**: Complete the following tutorials for guidance on how to set up and test the HCP Terraform run task integration:
+ - [Identify compromised artifacts with HCP Terraform](/packer/tutorials/hcp/run-tasks-data-source-image-validation)
+ - [Set Up HCP Terraform Run Task for HCP Packer](/packer/tutorials/hcp/setup-tfc-run-task)
+
+### Set up the HCP Terraform run task for HCP Packer
+
+1. Open the HCP Packer homepage and click **Integrate with HCP Terraform**.
+1. When prompted, copy the values in the **Endpoint URL** and **HMAC Key** fields. These values are required to create the run task in HCP Terraform.
+1. Complete the instructions described in the HCP Terraform documentation for [creating a run task](/terraform/cloud-docs/workspaces/settings/run-tasks#creating-a-run-task) and [associating run tasks with a workspace](/terraform/cloud-docs/workspaces/settings/run-tasks#associating-run-tasks-with-a-workspace).
+
+### Review run task output
+
+Run the Terraform configuration associated with the workspace containing the run task. Refer to the [HCP Terraform documentation](/terraform/cloud-docs/run/manage) for details.
+
+After each run, you can click **Details** to open the HCP Packer registry home page if you need to make changes to versions or channels.
+
+The details about the run task vary depending on your HCP Packer edition.
+
+#### Essentials edition run task
+
+For Essentials edition registries, the run task scans resources for artifacts retrieved by the `hcp_packer_artifact` data source.
+
+The run task scans all the resources in the plan and only validates resources that reference HCP Packer data sources. The run task fails when any new or replaced resources reference a revoked version. HCP Packer stops the Terraform run if the run task's enforcement mode is set to `Mandatory`. The run proceeds with a warning if the mode is set to `Advisory`.
+
+Terraform also prints the following information about the run task operation to the console:
+
+- The number of resources scanned.
+- The number of resources referencing revoked versions.
+- Whether a more recent version is available in HCP Packer. Use this information to generate new versions for revoked artifacts as necessary and to update the channels accordingly.
+- The number of resources referencing versions that are scheduled to be revoked.
+
+#### Standard edition run task
+
+For Standard edition registries, the run task performs the following types of validation:
+
+- _Data source_ artifact validation: The run task scans planned resources that reference artifacts through the HCP Packer data source.
+- _Resource_ artifact validation: The run task scans planned resources that use hard-coded machine artifact IDs. Refer to [Supported resource types for the HCP Terraform run task](/hcp/docs/packer/reference/run-task) for a list of resources that the run task can validate.
+
+The run task scans all the resources known so far in the plan. For each resource, the run task checks for an artifact associated with a version in HCP Packer. The run task fails when any new or replaced resources reference a revoked version.
HCP Packer stops the Terraform run if the run task's enforcement mode is set to `Mandatory`. The run proceeds with a warning if the mode is set to `Advisory`.
+
+HCP Terraform also displays a structured list of the scanned resources, showing each resource's status and its matched HCP Packer artifact.
+
diff --git a/content/hcp-docs/content/docs/vagrant/index.mdx b/content/hcp-docs/content/docs/vagrant/index.mdx
new file mode 100644
index 0000000000..8fee5b0c62
--- /dev/null
+++ b/content/hcp-docs/content/docs/vagrant/index.mdx
@@ -0,0 +1,62 @@
+---
+layout: docs
+page_title: What is HCP Vagrant Registry?
+description: The HCP Vagrant Registry is a public, searchable index of Vagrant boxes that allows box owners to publish and share their Vagrant boxes.
+---
+
+# What is HCP Vagrant Registry?
+
+HCP Vagrant Registry, also referred to as HCP Vagrant, is a public, searchable index of Vagrant boxes that lets box owners publish and share their Vagrant boxes.
+
+Boxes are the package format for HashiCorp Vagrant environments. You can use a box to bring up an identical working environment on any Vagrant-supported platform. Refer to the [Vagrant boxes](/vagrant/docs/boxes) documentation for more information.
+
+## How does HCP Vagrant work?
+
+HCP Vagrant stores and serves the metadata associated with Vagrant boxes, which are distributed as [`.box`](https://developer.hashicorp.com/vagrant/docs/boxes/format) files that contain all the required information for a provider to launch a Vagrant machine.
+
+The two primary workflows for HCP Vagrant are [box discovery](#box-discovery) and [box creation & versioning](#box-creation-and-versioning).
+
+### Box discovery
+
+The **Discover Public Boxes** page lets you find and filter publicly available Vagrant boxes by supported providers and architectures. These are boxes created by both HashiCorp and community contributors.
You can find a box's owner from the username in the box URL or by clicking the username in the top left corner of the box page.
+
+Public Vagrant boxes are a popular way to share a base development environment that you can launch with a single command within an organization or community.
+
+### Box creation and versioning
+
+HCP Vagrant lets members of your HCP organization create new versions of boxes in your registry. Versioning boxes in HCP Vagrant lets you update their content, implement fixes, and communicate the changes you make.
+
+## HCP Vagrant and Vagrant Cloud
+
+In December 2022, Vagrant Cloud integrated with HCP Vagrant, delegating new user creation and account management to HCP.
+
+Vagrant Cloud users have the opportunity to migrate their Vagrant Cloud organizations to HCP Vagrant in advance of the site-wide migration of Vagrant Cloud to HCP Vagrant. Refer to the [migrate to HCP Vagrant Registry](https://developer.hashicorp.com/vagrant/vagrant-cloud/hcp-vagrant/migration-guide) documentation for more information.
+
+The Vagrant team has started to migrate all existing boxes to HCP Vagrant and retire Vagrant Cloud. If you have not migrated your boxes, we will contact you at the email address associated with your primary Vagrant Cloud organization with instructions on how to claim your boxes in HCP Vagrant.
+
+Migrated organizations will be permanently accessible at their original Vagrant Cloud URLs and won't require changes to user workflows. Migrated registries will have access to the modern HCP Vagrant UI, an improved search experience, and free private boxes.
+
+## Changes from Vagrant Cloud
+
+HCP Vagrant inherits HCP's resource sharing and access control model and does not support Vagrant Cloud Collaborators.
+
+An HCP Vagrant registry, and any private boxes within that registry, are visible to any HCP user with access to the parent HCP organization.
+
+Since HCP Vagrant does not currently support box- or registry-level access restrictions, paid Vagrant Cloud users who migrate to HCP Vagrant will no longer be charged for their private boxes.
+
+| | Vagrant Cloud | HCP Vagrant Registry |
+| -------------- | ------------- | -------------------- |
+| Box Discovery | ✓ | ✓ |
+| Box Creation | ✓ | ✓ |
+| Box Versioning | ✓ | ✓ |
+| Authorization | Internal | HCP RBAC |
+
+For information on the permissions each HCP RBAC role grants to the HCP Vagrant Registry, refer to the [configure HCP Vagrant permissions](/hcp/docs/vagrant/permissions) documentation.
+
+## Tiers
+
+HCP Vagrant has a single, standard (free) tier.
+
+## Community
+
+Please submit questions, suggestions, and requests to [HashiCorp Discuss](https://discuss.hashicorp.com/c/vagrant/24).
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/vagrant/permissions.mdx b/content/hcp-docs/content/docs/vagrant/permissions.mdx
new file mode 100644
index 0000000000..1553a76a1d
--- /dev/null
+++ b/content/hcp-docs/content/docs/vagrant/permissions.mdx
@@ -0,0 +1,35 @@
+---
+page_title: Configure HCP Vagrant permissions
+description: |-
+  Permissions table for HCP Vagrant.
+---
+
+# Configure HCP Vagrant permissions
+
+HCP user accounts inherit permissions based on their roles at either the
+[organization](/hcp/docs/hcp/admin/orgs) or
+[project](/hcp/docs/hcp/admin/projects) level.
+
+There are three HCP roles that map to HCP Vagrant permissions: Viewer, Contributor, and Admin. If an HCP IAM user is not an HCP admin, you must add them to an HCP IAM group that has at least one HCP role and belongs to the project HCP Vagrant is configured in.
+
+The following table lists HCP Vagrant permissions based on the three HCP basic roles:
+
+| HCP Vagrant permissions | Viewer | Contributor | Admin |
+| ------------------------ | ------ | ----------- | ----- |
+| View private boxes | ✅ | ✅ | ✅ |
+| Create a new box | ❌ | ✅ | ✅ |
+| Edit an existing box | ❌ | ✅ | ✅ |
+| Delete a box | ❌ | ✅ | ✅ |
+| Create a new version | ❌ | ✅ | ✅ |
+| Release a new version | ❌ | ✅ | ✅ |
+| Edit an existing version | ❌ | ✅ | ✅ |
+| Delete a version | ❌ | ✅ | ✅ |
+| Revoke a version | ❌ | ✅ | ✅ |
+| Create registry | ❌ | ❌ | ✅ |
+| Manage registry | ❌ | ❌ | ✅ |
+| Delete registry | ❌ | ❌ | ✅ |
+
+For information on managing user permissions, refer to the [manage users](https://developer.hashicorp.com/hcp/docs/hcp/iam/users#manage-users) documentation.
+
diff --git a/content/hcp-docs/content/docs/vagrant/reclaim-vagrant-cloud.mdx b/content/hcp-docs/content/docs/vagrant/reclaim-vagrant-cloud.mdx
new file mode 100644
index 0000000000..5ba0a2c15e
--- /dev/null
+++ b/content/hcp-docs/content/docs/vagrant/reclaim-vagrant-cloud.mdx
@@ -0,0 +1,51 @@
+---
+page_title: Reclaiming Auto-Migrated Organizations
+description: "How to claim Vagrant Cloud organizations that have been auto-migrated to HCP Vagrant Registry"
+---
+
+# Reclaim registries from Vagrant Cloud
+
+This topic explains how to reclaim your Vagrant Cloud registries in HashiCorp Cloud Platform (HCP) Vagrant Registry that were migrated by HashiCorp.
+
+HCP Vagrant launched in early spring 2024 as the successor to Vagrant Cloud. As the next step of the transition, HashiCorp is automatically migrating existing Vagrant Cloud boxes to HCP Vagrant. While auto-migrated boxes remain available at their current URLs, organization owners must claim their registries to continue managing and releasing boxes.
+
+## Reclamation process overview
+
+The migration process follows these major steps:
+
+1.
Organization owners receive a notification email after HashiCorp migrates their organization. +1. Users sign in to HCP and view their unclaimed registries. +1. Users claim their registries by selecting them and assigning them to an HCP project. +1. After claiming registries, users can manage them through the HCP UI. + +## Prerequisites + +Before starting the migration process, you will need an HCP account that uses the same email associated with your Vagrant Cloud account. The email associated with your Vagrant Cloud account may be different from your Vagrant Cloud login email. You can visit `Profile` in the Vagrant Cloud settings to verify your primary email address. + +## Check migration status + +When HashiCorp migrates a Vagrant Cloud organization, all users with owner-level access receive a notification explaining how to reclaim their auto-migrated registries. If you have not received a notification, check your spam folder for the email. + +Migrated organizations are not available from **Settings** in Vagrant Cloud. If an organization is no longer visible in Vagrant Cloud settings, HashiCorp has likely auto-migrated the organization to HCP Vagrant Registry. + +## Access your unclaimed registries + +You need to link your Vagrant Cloud and HCP Vagrant accounts before you can view and reclaim your Vagrant Cloud registries. If you have not linked your Vagrant Cloud account, click **Link & reclaim** and sign in to Vagrant Cloud to link your accounts. + +![Link Vagrant Cloud and HCP Vagrant accounts](/img/docs/vagrant/link-accounts.png) + +Navigate to **Vagrant Registry**, then **Unclaimed Registries** in the sidebar to view the auto-migrated Vagrant Cloud organizations associated with your account. + +## Claim your registries + +After you link Vagrant Cloud and HCP accounts, the **Unclaimed Registries** page will have a list of auto-migrated registries. This process will only take a few moments, but some users may need to refresh the **Unclaimed Registries** page.
+ +Select the checkboxes for registries you want to claim, then choose an HCP Project you want to migrate the registries to. Click the **Reclaim** button. + +Once you have reclaimed your Vagrant Cloud registries, go to your Registries page to interact with your migrated registries. + +## Troubleshooting + +If your accounts are linked and you still have no claimable registries, you can confirm that your Vagrant Cloud organization has been migrated. Check your **Organizations** tab in Vagrant Cloud to ensure your organizations are not still in Vagrant Cloud. + +If you are unable to access your transferred registries, or if you have linked your accounts and there are organizations missing from both Vagrant Cloud and your Unclaimed Registries page, [contact support](/vagrant/intro/support) for assistance. diff --git a/content/hcp-docs/content/docs/vault-radar/agent/correlate-vault.mdx b/content/hcp-docs/content/docs/vault-radar/agent/correlate-vault.mdx new file mode 100644 index 0000000000..ecbb624542 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/agent/correlate-vault.mdx @@ -0,0 +1,153 @@ +--- +page_title: Correlate findings with Vault Enterprise +description: >- + Correlate findings from HCP Vault Radar with secrets stored in HCP Vault Dedicated or Vault Enterprise. +--- + +# Correlate findings with Vault + + + +Correlation between HCP Vault Radar and HashiCorp Vault is supported on HCP +Vault Dedicated or Vault Enterprise clusters. + + + +When the HCP Vault Radar agent connects to an HCP Vault Dedicated or Vault Enterprise cluster, +Vault Radar can correlate findings with secrets stored in Vault. This allows you to identify +which secrets you need to rotate. + +## Connect a Vault cluster + +Before you can correlate findings with Vault, you need to [deploy the Radar +agent](/hcp/docs/vault-radar/agent/deploy). Once you deploy the agent, you can +configure and connect Vault to the agent.
+ +### Create a Vault policy + +Vault Radar requires the following capabilities: +- Validate tokens (using the self-lookup API) +- List and read all namespaces +- List all auth methods and mounts in each namespace +- List all secrets in a KV secrets engine mount +- Read all the versions of a secret in a KV secrets engine mount + +A policy granting just the required level of access requires explicitly specifying the namespaces and KV mounts. + +```hcl +path "auth/token/lookup-self" { + capabilities = ["read"] +} + +path "sys/license/status" { + capabilities = ["read"] +} + +# Assumption: Namespaces are at most 2 levels deep +path "sys/namespaces/*" { + capabilities = ["read", "list"] +} + +path "+/sys/namespaces/*" { + capabilities = ["read", "list"] +} + +path "+/+/sys/namespaces/*" { + capabilities = ["read", "list"] +} + +path "sys/auth" { + capabilities = ["read"] +} + +path "+/sys/auth" { + capabilities = ["read"] +} + +path "+/+/sys/auth" { + capabilities = ["read"] +} + +path "sys/mounts" { + capabilities = ["read"] +} + +path "+/sys/mounts" { + capabilities = ["read"] +} + +path "+/+/sys/mounts" { + capabilities = ["read"] +} + +# Assumption: KV secret engine mounts are at most 2 levels deep +path "+/metadata/*" { + capabilities = ["read", "list"] +} + +path "+/+/metadata/*" { + capabilities = ["read", "list"] +} + +path "+/+/+/metadata/*" { + capabilities = ["read", "list"] +} + +path "+/+/+/+/metadata/*" { + capabilities = ["read", "list"] +} + +path "+/data/*" { + capabilities = ["read"] +} + +path "+/+/data/*" { + capabilities = ["read"] +} + +path "+/+/+/data/*" { + capabilities = ["read"] +} + +path "+/+/+/+/data/*" { + capabilities = ["read"] +} +``` + +For less restrictive environments, you can give broader permissions to Vault +Radar. + +The following simple policy grants Vault Radar broad access to your Vault cluster:
+ +```hcl +path "*" { + capabilities = ["read", "list"] +} +``` + +### Agent configuration with Vault + +Set up and manage a Vault cluster from the Vault Radar module in the [HCP Portal](https://portal.cloud.hashicorp.com/). Select **Settings**, then **Secret Managers**, and then click **Connect new secret manager**. +![Select an index source to scan](/img/docs/vault-radar/agent/agent-index-source-selection.png) + +1. Select Vault and the Vault deployment type. +1. Provide your Vault cluster URL. +1. Select an auth method, fill in the details on the form, and select **Next** to validate the connection. + + + + +@include 'vault-radar/indexing/kubernetes-auth.mdx' + + + + +@include 'vault-radar/indexing/app-role-auth.mdx' + + + + +@include 'vault-radar/indexing/token-auth.mdx' + + + diff --git a/content/hcp-docs/content/docs/vault-radar/agent/deploy.mdx b/content/hcp-docs/content/docs/vault-radar/agent/deploy.mdx new file mode 100644 index 0000000000..4ef4cafd27 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/agent/deploy.mdx @@ -0,0 +1,136 @@ +--- +page_title: Agent deploy +description: >- + Deploy the Vault Radar agent on various platforms. +--- + +# Deploy Vault Radar agent + +The HCP Vault Radar agent allows you to scan on-premises data sources for +secrets that are not accessible by the cloud scanner, and enables correlation +for secrets found by Vault Radar that are stored in Vault Enterprise. + +## Prerequisites + +- Access to HCP with an account that can create service principals +- Vault Radar CLI installed +- The agent requires the following environment variables to run and connect to HCP: + + - `HCP_CLIENT_ID` + - `HCP_CLIENT_SECRET` + - `HCP_PROJECT_ID` + - `HCP_RADAR_AGENT_POOL_ID` + +## Create a service principal + +Project-level service principals interact with resources within +a specific project in an organization. By default, they can only access +resources in the same project.
However, you can assign roles to service +principals for projects beyond their original scope. + +1. Create a [project-level service principal](/hcp/docs/hcp/iam/service-principal#project-level-service-principals-1) + with the **Vault Radar Agent** role assigned. + +1. Generate a [key](/hcp/docs/hcp/iam/service-principal#generate-a-service-principal-key) + for the service principal. + +1. Export an environment variable for the client ID and client secret. + + ```shell-session + $ export HCP_CLIENT_ID=actual-client-id HCP_CLIENT_SECRET=actual-client-secret + ``` + +## Create an agent pool in the HCP Portal + +An agent pool is a group of agents that share the same +`HCP_RADAR_AGENT_POOL_ID`, enabling higher throughput by horizontally scaling the +number of agents. + +1. Navigate to [the HCP Portal](https://portal.cloud.hashicorp.com/) and open HCP Vault Radar. + +1. Click **Settings**. + +1. Click **Agent**. + +1. Click **Add an agent pool** and follow the prompts to create a new agent + pool. + + The final page displays the required configuration information. You can + retrieve this information later from the **Agent** section of the settings page. + +1. From the **Connect the agent** page, copy the commands to create environment + variables for the pool ID and project ID. + + **Example:** + + + + ```shell-session + $ export HCP_RADAR_AGENT_POOL_ID=actual-pool-id + export HCP_PROJECT_ID=actual-project-id + ``` + + + +### Configure secret values + +For most data sources, the agent needs credentials to authenticate +with the data source itself. When configuring your data source on HCP, you may +be prompted to define a credential needed for the integration to work. + + + +The agent expects a URI. The only supported resource is an +environment variable. + +**Example:** + +```text +env://ENV_VARIABLE_NAME +``` + + + +If you are configuring a GitHub data source, you need to generate a +GitHub PAT for the agent.
Save the value of that PAT as an environment variable local to the +agent. If you saved the environment variable as +`VAULT_RADAR_GIT_TOKEN`, then the URI for that variable entered on HCP should be +`env://VAULT_RADAR_GIT_TOKEN`. + +### Additional configuration + +- The agent will respect configurations set by a `.hashicorp/vault-radar/ignore.yaml` file. + See [Custom ignore rules](/hcp/docs/vault-radar/cli/configuration/write-custom-ignore-rules). + +- The agent will respect the log level set by the `VAULT_RADAR_LOG_LEVEL` environment + variable. See [supported log levels](/hcp/docs/vault-radar/cli#log-level). + +## Connect a data source + +You can set up a data source and manage it from the Vault Radar module in the +[HCP Portal](https://portal.cloud.hashicorp.com/). + +1. Click **Settings**. + +1. Click **Data Sources**. + +1. Click **Add data source**. + +1. Select agent scan. + ![Select agent scan](/img/docs/vault-radar/agent/onboard-agent-data-source.png) + +1. Select the type of data source you want to set up. Provide the information prompted by the data source's form. + ![Select a data source to scan](/img/docs/vault-radar/agent/agent-data-source-selection.png) + + + + +@include 'vault-radar/deploy-agent-using-k8s.mdx' + + + + +@include 'vault-radar/run-agent-local.mdx' + + + diff --git a/content/hcp-docs/content/docs/vault-radar/agent/overview.mdx b/content/hcp-docs/content/docs/vault-radar/agent/overview.mdx new file mode 100644 index 0000000000..2f99e81d70 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/agent/overview.mdx @@ -0,0 +1,21 @@ +--- +page_title: Agent overview +description: >- + Scan for secrets in on-premises data sources and correlate findings from HCP Vault Radar with secrets stored in HCP Vault or Vault Enterprise. +--- + +# Vault Radar agent overview + +The Vault Radar agent allows you to host Vault Radar scanning using your own +deployment strategies.
Configure and manage the agent using the Vault Radar +module in the [HCP Portal](https://portal.cloud.hashicorp.com/). + +The agent performs all scanning once configured. This can be useful if you are +security-conscious about where you store and scan your content, or if you +have resources that are not publicly accessible by HCP. + +The Vault Radar agent is part of the +[`vault-radar`](/hcp/docs/vault-radar/cli#download-and-install-cli) CLI. + +You can [deploy the agent](/hcp/docs/vault-radar/manage/agent/deploy) locally or +as a Kubernetes pod using Helm. \ No newline at end of file diff --git a/content/hcp-docs/content/docs/vault-radar/cli/changelog.mdx b/content/hcp-docs/content/docs/vault-radar/cli/changelog.mdx new file mode 100644 index 0000000000..b8c744cc60 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/changelog.mdx @@ -0,0 +1,203 @@ +--- +page_title: Changelog +sidebar_title: Changelog +description: |- + Vault Radar CLI Changelog. +--- + +# Changelog + +Keep track of changes to the Vault Radar CLI. + +### 0.33.0 +- Minor bug fixes and performance improvements + +### 0.32.0 +- Improvements to the Jira integration +- Minor bug fixes and performance improvements + +### 0.31.0 +- Added support for Jira's attachment-added webhook +- Support for detecting: + - Proxmox API token +- Minor bug fixes and performance improvements + +### 0.30.0 +- Improvements to Slack metering and handling of various bots +- Support for scanning Jira attachments +- Support for detecting: + - Databricks tokens and secrets + - OCI tokens +- Minor bug fixes and performance improvements + +### 0.29.0 +- Added support for detecting: + - Sentry.io client secrets + - Snowflake JWT tokens + - Scaleway secret key +- Minor bug fixes and performance improvements + +### 0.28.0 + +- Added support for detecting: + - Sentry.io Organization and Personal Tokens + - Vercel access tokens +- Added metering support for Slack scanning +- Minor bug fixes and performance improvements +
+### 0.27.0 + +- Support for detecting OpenAI API Key Types +- Additional context added for passwords if possible +- Minor bug fixes and performance improvements + +### 0.26.0 +- Minor bug fixes and performance improvements + +### 0.25.0 +- Support for HCP Connected CI scanning +- Minor bug fixes and performance improvements + +### 0.24.0 +- Minor bug fixes and performance improvements + +### 0.23.0 +- Fixed bug where metadata risks were reported as historical risks + +### 0.22.0 +- Scans now include additional context around detected database passwords if possible +- Support for restricting repo and inline rules at the admin level + - Please reach out to your HashiCorp support contact for more information and help enabling the feature +- Improvements to the Agent logs +- Fixed a bug around Events being generated with a file path instead of a content id +- Performance improvements and minor bug fixes + +### 0.21.1 +- Bug fix around event confidences + +### 0.21.0 +- Bug fixes and performance improvements +- Improvements to reduce false positives for API keys + +### 0.20.0 +- Improvements to the Agent logs +- Minor bug fixes and performance improvements + +### 0.19.0 +- Support for detecting new token patterns: + - Auth0 Tokens + - Azure ARM Tokens + - Azure Refresh Tokens + - Azure DevOps Personal Access Tokens +- CLI Scan Commands send metering information to HCP for the following data sources: + - Git + - Confluence + - Jira +- Minor performance improvements + +### 0.18.0 +- Agent sends metering information to HCP +- Support detection of PagerDuty and Boundary Tokens +- Minor bug fixes + +### 0.17.0 +- Improvements to the Agent logs +- The scan file command no longer requires TTY when --disable-ui is set +- Minor bug fixes and performance improvements + +### 0.16.0 +- Vault Radar Agent indexing and correlation +- Vault Radar Agent support for Confluence data sources +- Vault Radar CLI Git pre-receive hook scanning +- Fixed a bug where scan commands fail to fetch 
the secret hasher key from HCP + +### 0.15.0 +- Performance improvements + +### 0.14.0 +- Agent + - New Agent role for Service Principals supported +- CLI + - New CLI role for Service Principals supported + + +### 0.13.0 +- Scanning performance improvements +- New patterns added: + - Cloudflare API tokens + - DigitalOcean tokens +- Vault Radar agent is released as a beta feature. [Check out the documentation for more details](/hcp/docs/vault-radar/manage/agent/overview). + +### 0.12.0 +- Fix AWS Secrets Manager secret ARN false positive +- New detected risk types: + - Google OAuth Refresh Token + - Google OAuth Client Secret +- Jira scanning now includes issue summary +- Fixed bug where Git token was required to scan a locally cloned repository + +### 0.11.0 +- **Breaking Changes:** + - This version introduces a breaking change for users who rely on the `--offline` flag for their command usage. A new Vault Radar license will need to be generated and configured locally for the command to continue to work in offline mode. Please reach out to your HashiCorp customer success team to generate a new license.
+- Jira user metering + +### 0.10.0 +- `scan ci pr` command uploads metering data to HCP +- Upload Jira scan results to HCP +- Add or update patterns for: + - Stripe token + - Tencent WeChat API app id + - Telegram bot token + - Facebook access token + +### 0.9.0 +**Note:** Usage of `GITHUB_TOKEN` as a default ENV variable consumed by the `vault-radar` binary was removed in this release. + +- Fetch JIRA issue and comment author email address +- Dynamic PAT support for scan_repo command +- Add Okta API token, Salesforce access token, CircleCI API token +- Add the ability to skip activeness checks +- Update Heroku API key / OAuth token pattern +- CI PR scanning supports scanning individual commits +- CI PR scanning command + +### 0.8.0 +- Tip of branch scanning for CI + +### 0.7.1 + +- Value hashing improvements for formatted JIRA content +- Improvements to detecting false positives for XML content +- Bug fixes + - Improve archive files error handling + +### 0.7.0 + +- Scan archives and compressed files +- Support ignore.yaml in git repositories +- Validate whether a command is enabled when run in online mode +- Ability to read license from a file +- Bug fixes + - Fix an issue where secrets with two slashes are not being reported + +### 0.6.0 + +- New commands + - Add command to meter Git users (GitHub, GitLab and Bitbucket) +- Bug fixes + - Install git in the Docker image + - Ability to run Station in a Kubernetes cluster + +### 0.5.0 + +- Add Windows support +- Add licensing support in offline mode +- Add brew, RPM, and DEB packaging +- New commands + - Add command to meter Confluence users +- Confluence scanning changes + - Fix Confluence history scanning on Windows + - Use user email as author instead of account id/display name +- Bug fixes + - Fix impossible error logs for TFE variable scan + - Better error handling for Station scans diff --git a/content/hcp-docs/content/docs/vault-radar/cli/configuration/aws-authentication.mdx
b/content/hcp-docs/content/docs/vault-radar/cli/configuration/aws-authentication.mdx new file mode 100644 index 0000000000..a944b7da84 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/configuration/aws-authentication.mdx @@ -0,0 +1,53 @@ +--- +page_title: AWS Authentication for Vault Radar CLI +description: |- + Learn how to set up the Vault Radar CLI to authenticate with AWS. +--- + +# AWS Authentication + +The AWS-related commands need credentials in order to authenticate to AWS. +There are multiple ways to configure authentication, similar to how +`aws` CLI authentication is set up. + +### Credentials in environment + +Set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` variables in the environment. +For example, + +```shell-session +$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE +$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY +``` + +Note: If you are using temporary credentials from STS, `AWS_SESSION_TOKEN` also needs to be set. + +Refer: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html + +### Named profile in credentials file + +`vault-radar` can read credentials from the AWS credentials file, usually located at `$HOME/.aws/credentials`. +By default, it looks for a profile named `default`. However, any other profile in the credentials file can be +chosen by setting the `AWS_PROFILE` environment variable. + +Refer: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-using-profiles + +### EC2 Instance profile + +When running `vault-radar` on an AWS EC2 instance, you can assign an IAM role to the EC2 instance and grant it +access to the resource that needs to be scanned. + +These credentials will be available to the code running on the instance through the Amazon EC2 metadata service.
+ +Refer: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html + + +### ECS & EKS + +It is also possible to run `vault-radar` as a container in AWS ECS (Elastic Container Service) or AWS EKS (Elastic Kubernetes Service). +You need to create an IAM role with a policy granting access to the resource that needs to be scanned and assign it appropriately. +For ECS, you assign the IAM role to the task. +For EKS, you create an IAM OIDC provider for the cluster and assign the role to a service account. + +Refer: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html +Refer: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html diff --git a/content/hcp-docs/content/docs/vault-radar/cli/configuration/combine-multiple-scan-results.mdx b/content/hcp-docs/content/docs/vault-radar/cli/configuration/combine-multiple-scan-results.mdx new file mode 100644 index 0000000000..9a6c143ead --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/configuration/combine-multiple-scan-results.mdx @@ -0,0 +1,92 @@ +--- +page_title: Combine multiple scan results for Vault Radar CLI +description: |- + Learn how to combine multiple scanned results into a single file. +--- + +# Combine multiple scan results + +It might be useful to create a single file with results from multiple different +scans. The combined file can be used to look at all scan results in a single +view, or can be passed to other commands as well. + +How to combine the results depends on the format of the resulting file. + +## JSON + +If the output file format is JSON, combining multiple results is simple. Because the +format is JSON Lines, each line is completely independent, and you can +concatenate all the result files: + +```shell-session +$ cat results-1.jsonl results-2.jsonl > combined-results.jsonl +``` + +#### Example + +Scan two different Vault clusters and create a combined index.
See +[index vault](/hcp/docs/vault-radar/cli/index/vault#index-generation) for +information about creating an index. + +1. Scan a Vault cluster and store the result in `vault-index-1.jsonl`. + + ```shell-session + $ VAULT_ADDR=[ADDR1] VAULT_TOKEN=[TOKEN1] vault-radar index vault \ + -o vault-index-1.jsonl --index + ``` + +1. Scan another Vault cluster and store the result in `vault-index-2.jsonl`. + + ```shell-session + $ VAULT_ADDR=[ADDR2] VAULT_TOKEN=[TOKEN2] vault-radar index vault \ + -o vault-index-2.jsonl --index + ``` + +1. Combine the two scan results. + + ```shell-session + $ cat vault-index-1.jsonl vault-index-2.jsonl > combined-vault-index.jsonl + ``` + +## CSV + +Each CSV file has a header line, so simply concatenating the files will not work properly. +To combine them, use a combination of the `head` and `tail` commands. + +```shell-session +$ head -n 1 results-1.csv > combined-results.csv && \ + tail -n+2 -q results-1.csv results-2.csv >> combined-results.csv +``` + +#### Example + +Scan two git repositories and create a combined file. + +1. Scan a git repository and store the result in `scan-repo-results-1.csv`. + + ```shell-session + $ vault-radar scan repo -u -o scan-repo-results-1.csv + ``` + +1. Scan another git repository and store the result in + `scan-repo-results-2.csv`. + + ```shell-session + $ vault-radar scan repo -u -o scan-repo-results-2.csv + ``` + +1. Combine the two scan results. + + ```shell-session + $ head -n 1 scan-repo-results-1.csv > combined-results.csv && \ + tail -n+2 -q scan-repo-results-1.csv scan-repo-results-2.csv >> combined-results.csv + ``` + + + +As a best practice, combine CSV results from the same command. For example, +combine the returned results from the `scan repo` command. Combining results +from different commands (for example, `scan repo` and `scan folder`) will not work properly +because different commands have different columns.
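To see the `head`/`tail` pattern end to end, here is a self-contained demonstration with two toy result files. The file names and columns are invented for illustration; real scan output has many more columns.

```shell
# Create two small CSV files that share the same header line
printf 'severity,description\ncritical,aws access key\n' > results-1.csv
printf 'severity,description\nhigh,database password\n' > results-2.csv

# Keep the header from the first file only, then append data rows from both
head -n 1 results-1.csv > combined-results.csv
tail -n +2 -q results-1.csv results-2.csv >> combined-results.csv

# The combined file has one header line followed by both data rows
cat combined-results.csv
```

No matter how many input files you pass to `tail`, the combined file keeps exactly one header line.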
+ + diff --git a/content/hcp-docs/content/docs/vault-radar/cli/configuration/create-custom-risk-types.mdx b/content/hcp-docs/content/docs/vault-radar/cli/configuration/create-custom-risk-types.mdx new file mode 100644 index 0000000000..8b8ab5a534 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/configuration/create-custom-risk-types.mdx @@ -0,0 +1,69 @@ +--- +page_title: Create custom risk types +description: |- + Learn how to create custom risk types that the Vault Radar CLI will recognize. +--- + +# Create custom risk types + +You can define a custom risk type that the CLI will recognize. It can be a secret +(for example, an API token), PII (Personally Identifiable Information), or NIL +(Non-Inclusive Language). + +## File format + +A custom risk type is defined in a YAML file. + +#### Example + +The following file detects a GitLab personal access token: + +```yaml +regex: + value: glpat-[a-zA-Z0-9\-_]{20} +type: gitlab_personal_access_token +category: secret +description: GitLab personal access token +precedence: strong_pattern +``` + +### Field descriptions + +| Field | Description | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| value | Specifies a regular expression to match the risk. Vault Radar supports Go-style regular expressions as well as PCRE | +| type | Unique identifier for the risk type. While there are no restrictions on the actual value, the best practice is to keep it to lower-case letters and underscores only | +| category | Risk category. Must be one of `secret`, `pii`, or `nil` | +| description | Human-friendly description of the risk type. | +| precedence | This is internal to Vault Radar; use `strong_pattern` for all custom risk types. | + + +## Location + +The CLI loads `.yaml` files from the `$HOME/.hashicorp/vault-radar/custom_patterns` folder. + +## Examples + +Here are examples of custom risk definitions.
+ +**Non-Inclusive Language:** + +```yaml +regex: + value: (?i)whitelist +type: nil_whitelist +category: nil +description: Non-inclusive Language - Whitelist +precedence: strong_pattern +``` + +**PII:** + +```yaml +regex: + value: \b((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\b +type: pii_ipv4 +category: pii +description: PII - IPv4 +precedence: strong_pattern +``` diff --git a/content/hcp-docs/content/docs/vault-radar/cli/configuration/csv-output-definition.mdx b/content/hcp-docs/content/docs/vault-radar/cli/configuration/csv-output-definition.mdx new file mode 100644 index 0000000000..c264c354e6 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/configuration/csv-output-definition.mdx @@ -0,0 +1,55 @@ +--- +page_title: CSV output definitions +description: |- + Definitions of the various fields that will be present in the CSV output. +--- + +# CSV output definitions + +This page defines the various fields that can be present in the CSV output. + +## Field descriptions + +The following are field definitions that can be present in any CSV output: + +| Field name | Description | +| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Category | This is the type of risk found by Vault Radar. For example: secret, PII, or NIL. | +| Description | This is a short human-readable description or explanation of the risk. | +| Created At | This is the time the risk was created or introduced. | +| Author | This is the user associated with creating or introducing the risk. | +| Severity | This is a classification of the risk.
Critical risks are the ones Vault Radar believes most deserve the user's immediate attention, followed by High, Medium, and Info. | +| Is Historic | This means the risk was first created in a version that precedes the most recent version, and that the risk is not present in the most recent version of the content. | +| Deep Link | This is a link to the content where the risk was found. | +| Value Hash | This is a hash of the secret value itself. This is NOT the value of the secret. Identical hashes mean the secret values are identical. | +| Fingerprint | This is a value that is used to distinguish different risk events and incorporates time and location into the value's generation. This value is useful when depending on the output as part of some integration or automation. | +| Textual Context | This is sometimes populated when Vault Radar identifies a secret value within some text. It can be helpful when trying to find a secret in a page or if there are multiple secrets on a page. | +| Activeness | Vault Radar will attempt to determine if a secret is active or inactive. In the cases where Vault Radar can definitively say a secret is active or inactive, the column will be populated. In all other cases, the column will not be populated, to indicate that the status is unknown. | +| Tags | These are human-readable context tags that may provide some additional information about a risk. | +| Managed Location | This is populated only when scanning with an index file. When the column is populated, that means a secret that is currently in the managed store was also found in whatever was being scanned. The value will be the location of the secret in the managed store. | +| Managed Location Is Latest | This is populated only when scanning with an index file. When this column is true, it means the secret that was found is the current version of the secret in the secret manager. | +| Total Managed Locations | This is populated only when scanning with an index file.
This is the number of times a particular risk was found in the secrets manager. | + +## Data source specific fields + +The following are field definitions for fields that will be present when +scanning specific data sources. + +### GitHub + +| Field name | Description | +| ------------- | -------------------------------------------------------------- | +| Git Reference | This is the git reference value where the risk was introduced. | + +### AWS Parameter Store + +| Field name | Description | +| -------------- | -------------------------------------------------------------------------------------------------- | +| Version | This is the version of the parameter where the risk was introduced. | +| AWS Account ID | This is the account ID associated with the version of the parameter where the risk was introduced. | + +### Amazon S3 + +| Field name | Description | +| -------------- | -------------------------------------------------------------------------------------------------- | +| AWS Account ID | This is the account ID associated with the object where the risk was introduced. | diff --git a/content/hcp-docs/content/docs/vault-radar/cli/configuration/upload-results-to-hcp.mdx b/content/hcp-docs/content/docs/vault-radar/cli/configuration/upload-results-to-hcp.mdx new file mode 100644 index 0000000000..e914786eb1 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/configuration/upload-results-to-hcp.mdx @@ -0,0 +1,46 @@ +--- +page_title: Upload scan results to HCP +description: |- + Learn how to upload the scan results returned by the CLI to HCP. +--- + +# HCP upload + +For commands where HCP upload is enabled, you need to configure +your environment to complete the upload process. + +## Create service principals + +Log in to the HCP web portal and follow the [Project service +principals documentation](/hcp/docs/hcp/iam/service-principal#project-level-service-principals-1) +to create a service principal.
+ +Service principals require the `Vault Radar CLI User` role in the `Vault Radar` service. Refer to the [HCP +IAM documentation](/hcp/docs/hcp/iam/users#project-role) for more information +on HCP roles. + +Service principal credentials should not be shared. + + + +Be sure to securely store the client secret after creation, as you will be unable to retrieve +it later. + + + +## Prepare your environment + +You will need to set the following environment variables for your CLI runtime, +referencing the newly created service principal. + +```shell-session +$ export HCP_PROJECT_ID= +$ export HCP_CLIENT_ID= +$ export HCP_CLIENT_SECRET= +``` + +### Currently supported scan data sources + +- [`scan confluence`](/hcp/docs/vault-radar/cli/scan/confluence) +- [`scan jira`](/hcp/docs/vault-radar/cli/scan/jira) +- [`scan repo`](/hcp/docs/vault-radar/cli/scan/repo) diff --git a/content/hcp-docs/content/docs/vault-radar/cli/configuration/write-custom-ignore-rules.mdx b/content/hcp-docs/content/docs/vault-radar/cli/configuration/write-custom-ignore-rules.mdx new file mode 100644 index 0000000000..41011444c4 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/configuration/write-custom-ignore-rules.mdx @@ -0,0 +1,44 @@ +--- +page_title: Custom ignore rules +description: |- + Learn how to set custom ignore rules to reduce noise in the scan result. +--- + +# Ignore rules + +You can configure Vault Radar to skip detected risks based on various parameters. To do +that, create a YAML file (see the example below) and put it in a path where +Vault Radar can find it. + +## Where to create the file + +Create a `$HOME/.hashicorp/vault-radar/ignore.yaml` file, or create `.hashicorp/vault-radar/ignore.yaml` relative to your working repository root.
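For instance, the repository-local variant can be created from the repository root; the paths below come from the locations above, while the editor command is illustrative:

```shell
# Create the directory Vault Radar looks in, relative to the repository root
mkdir -p .hashicorp/vault-radar
# Then author the rules file there (any editor works)
touch .hashicorp/vault-radar/ignore.yaml
```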
+ +### Example + +```yaml +# Ignore by file path +- paths: + - "**/*_test.go" + - cli/cmd/default-nil-config.yaml + - cli/cmd/data/* + +# Ignore by secret value +# Equivalent to 'secret_value == my_password OR secret_value == my_token' +- secret_values: + - my_password + - my_token + +# Ignore by secret type +# Equivalent to 'secret_type == password_assignment OR secret_type == secret_assignment' +- secret_types: [password_assignment, secret_assignment] +``` + +### Field descriptions + +| Field | Description | +| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `paths` | To skip risks found in particular **files**, add the rule to the `paths` section. Each entry can be a concrete file path or a glob mask. | +| `secret_values` | To skip particular **values**, add the rule to the `secret_values` section. Each entry is a regex; if the risk value matches the regex, it will be ignored. | +| `secret_types` | To skip particular **types**, add the rule to the `secret_types` section. Each entry is a regex; if the risk's secret type matches the regex, it will be ignored. | + diff --git a/content/hcp-docs/content/docs/vault-radar/cli/govern/vault.mdx b/content/hcp-docs/content/docs/vault-radar/cli/govern/vault.mdx new file mode 100644 index 0000000000..4ae8908828 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/govern/vault.mdx @@ -0,0 +1,111 @@ +--- +page_title: govern vault command +description: |- + The govern vault command is used for scanning a HashiCorp Vault Community Edition or Enterprise cluster. +--- + +# govern vault + +@include 'beta-feature.mdx' + +@include 'vault-radar/version-requirement.mdx' + +The `govern vault` command is used for scanning a HashiCorp Vault Community +Edition or Enterprise cluster. Scanning Vault using `govern vault` provides +compliance reporting and secrets indexing capabilities.
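As a quick orientation, a minimal end-to-end run sets the Vault connection variables and scans with a rotation period. The address, token, and output file name below are illustrative placeholders; the environment variables and flags are explained in the sections that follow:

```shell
# Illustrative values; point VAULT_ADDR and VAULT_TOKEN at your own cluster
export VAULT_ADDR=https://vault.example.com:8200
export VAULT_TOKEN=hvs.example-token

# Scan and flag secrets older than 90 days in the CSV output
vault-radar govern vault --outfile vault-scan.csv --rotation-period=90
```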
+ +## Authentication and authorization + +`vault-radar` requires the `VAULT_ADDR` and `VAULT_TOKEN` environment variables +to connect to your Vault cluster. The `govern vault` command +traverses the full namespace hierarchy. Within each namespace, it attempts to scan +every AppRole and KVv2 mount. Access can be limited via the +policies attached to the provided token. `vault-radar` attempts to use the +following endpoints: + +| Mount | Resource | Method | Endpoint | +|---------|----------------------------|--------|------------| +| System | List namespaces | `LIST` | [sys/namespaces](/vault/api-docs/system/namespaces#list-namespaces) | +| System | List auth methods | `GET` | [sys/auth](/vault/api-docs/system/auth#list-auth-methods) | +| System | List secret mounts | `GET` | [sys/mounts](/vault/api-docs/system/mounts#list-mounted-secrets-engines) | +| AppRole | List roles | `LIST` | [auth/:mount-path/role](/vault/api-docs/auth/approle#list-roles) | +| AppRole | List secret ID accessors | `LIST` | [auth/:mount-path/role/:role_name/secret-id](/vault/api-docs/auth/approle#list-secret-id-accessors) | +| AppRole | Lookup secret ID accessors | `POST` | [auth/:mount-path/role/:role_name/secret-id-accessor/lookup](/vault/api-docs/auth/approle#read-approle-secret-id-accessor) | +| KVv2 | Read KV engine config | `GET` | [:mount-path/config](/vault/api-docs/secret/kv/kv-v2#read-kv-engine-configuration) | +| KVv2 | List secrets | `LIST` | [:mount-path/metadata/:path](/vault/api-docs/secret/kv/kv-v2#list-secrets) | +| KVv2 | Read secret metadata | `GET` | [/:mount-path/metadata/:path](/vault/api-docs/secret/kv/kv-v2#read-secret-metadata) | +| KVv2 | Read secret version | `GET` | [/:mount-path/data/:path?version=:version-number](/vault/api-docs/secret/kv/kv-v2#read-secret-version) | + + +## Compliance reporting + +The `govern vault` command can be used to detect secrets that need to be rotated +to meet organization compliance requirements.
The output will contain entries +for AppRole and KVv2 secrets. AppRole secrets will have an entry per secret ID +accessor. KVv2 secrets will have an entry per secret sub-key, per version, per +KVv2 secret. For example, there will be 6 entries for a secret with 3 versions +that contains an AWS access key ID and secret key. KVv2 secret entries will +include a hashed version of the secret value. Vault Radar does not have access to AppRole +secret ID values, as those are only provided upon creation. Secret hashes can be +used as a mechanism to detect that a secret has sprawled within Vault +across multiple entries, mounts, or namespaces. + +The following command scans Vault and provides CSV output with a rotation +period of 90 days: + +```shell-session +$ vault-radar govern vault --outfile vault-scan.csv --rotation-period=90 +``` + +Secrets with an age greater than 90 days will have `medium` severity and provide +a warning of `Secret rotation period has been exceeded`. Warnings will also be +provided if a secret is close to exceeding its expiry period (provided by +`-expiry-period`) or if its number of uses (currently specific to AppRole) is +close to 0. Entries with multiple warnings will have a severity of `high`. + +The breadth of a scan can be limited and filtered in multiple ways. Most simply, +the `-limit` flag can be used to specify a maximum number of secrets to scan. +There are no ordering guarantees with this flag, however, so it is mostly useful +to capture a quick subset of data as a means to fully understand the output's +structure. The output can also be filtered more directly using one or more of +the following flags: + +- `-namespace`: Only secrets within this namespace will be present in the + output. Example: `vault-radar govern vault -namespace=ns1 ...` +- `-mount-type`: Only secrets within mounts of this type will be present in the + output. This flag is namespace and path agnostic.
Example: `vault-radar + govern vault -mount-type=approle ...` +- `-mount-path`: Only secrets within mounts of this _relative_ path will be + present in the output. This flag is namespace and mount-type agnostic. + Example: `vault-radar govern vault -mount-path=app1 ...` + +### HCP connection scanning behavior + +The govern commands require an HCP cloud connection to ensure +that hashes are generated using a shared salt from the +cloud, keeping them consistent across scans. To populate the required HCP connection +information, refer to the [HCP +upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page. + +### HCP Vault Dedicated considerations + +In an HCP Vault Dedicated cluster, the root namespace is restricted and users only +have access to the `admin` namespace and all the child namespaces within it. For +`vault-radar govern vault` to work against an HCP Vault Dedicated cluster, you must set the +`VAULT_PARENT_NAMESPACE` environment variable to the namespace that needs to be +scanned. + +```shell-session +$ export VAULT_PARENT_NAMESPACE=admin +$ vault-radar govern vault --outfile govern-results.csv --rotation-period=90 +``` + +The above command scans all the namespaces within `admin`, including the +`admin` namespace itself. + + + +The `VAULT_PARENT_NAMESPACE` variable also works on Vault Enterprise, but it is not +mandatory to set it. + + diff --git a/content/hcp-docs/content/docs/vault-radar/cli/index.mdx b/content/hcp-docs/content/docs/vault-radar/cli/index.mdx new file mode 100644 index 0000000000..1ca033366e --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/index.mdx @@ -0,0 +1,505 @@ +--- +page_title: Vault Radar CLI +description: |- + Learn how to use the Vault Radar CLI to scan files to detect unmanaged secrets.
+--- + +# Vault Radar CLI + +In addition to [the HCP Portal](https://portal.cloud.hashicorp.com/), Vault Radar offers an easy-to-use command-line +interface (CLI) to scan various data sources for unmanaged secrets and reduce +security vulnerabilities. + +@include 'vault-radar/version-requirement.mdx' + +## Download and install CLI + + + +Contact your customer success manager to enable HCP Vault Radar or for a license to run the +CLI in offline mode. + + + +The HCP Vault Radar CLI is available for download from +[releases.hashicorp.com/vault-radar](https://releases.hashicorp.com/vault-radar/) +as a zip archive and from popular package managers. It is also available as an image +in [Docker Hub](https://hub.docker.com/r/hashicorp/vault-radar/). + + + + +To install the HCP Vault Radar CLI, find the appropriate [package for your +system](https://releases.hashicorp.com/vault-radar/) and download it. The +`vault-radar` CLI is packaged as a zip archive. + +After downloading the zip archive, unzip the package. HCP Vault Radar +runs as a single binary named `vault-radar`. Any other files in the +package can be safely removed and `vault-radar` will still function. + +The final step is to make sure that the `vault-radar` binary is available on the +PATH. Refer to the instructions for setting the [PATH on Linux and +Mac](https://support.apple.com/guide/terminal/use-environment-variables-apd382cc5fa-4f58-4449-b20a-41c53c006f8f/mac) +or the [PATH on +Windows](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7.2). + + + + +The installation steps provided were tested on macOS 14.4 and should work on +other versions that utilize the Homebrew package manager. + +Refer to the [Homebrew installation +instructions](https://docs.brew.sh/Installation) if it is not already installed. + +1. Install the HashiCorp tap.
+ + ```shell-session + $ brew tap hashicorp/tap + ``` + +1. Install the HCP Vault Radar CLI. + + ```shell-session + $ brew install vault-radar + ``` + +1. Verify the installation. + + ```shell-session + $ vault-radar --help + + Usage: vault-radar [--version] [--help] [] + + Available commands are: + govern Govern commands + index Index commands + meter Meter commands + scan Scan commands + station Station management + version Shows the vault-radar cli version and golang version + ``` + + + + + + + +The installation steps provided were tested on Ubuntu 22.04 and should work on +other Debian based distributions that utilize the apt package manager. + +1. Update the apt repository. + + ```shell-session + $ sudo apt update + ``` + +1. Download the HashiCorp GPG key. + + ```shell-session + $ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg + ``` + +1. Add the HashiCorp repo. + + ```shell-session + $ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list + ``` + +1. Install the `vault-radar` CLI. + + ```shell-session + $ sudo apt update && sudo apt install vault-radar -y + ``` + +1. Verify the installation. + + ```shell-session + $ vault-radar --help + + Usage: vault-radar [--version] [--help] [] + + Available commands are: + govern Govern commands + index Index commands + meter Meter commands + scan Scan commands + station Station management + version Shows the vault-radar cli version and golang version + ``` + + + + +The installation steps provided were tested on CentOS Stream 9 and should work +on other distributions that utilize the yum package manager. + +1. Update the yum repository. + + ```shell-session + $ sudo yum update + ``` + +1. Add the HashiCorp repository. 
+ + + + Refer to the [official + packaging](https://www.hashicorp.com/official-packaging-guide) guide for the + correct RHEL/CentOS or Fedora configuration. + + + + ```shell-session + $ curl -fsSL https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo | sudo tee /etc/yum.repos.d/hashicorp.repo + ``` + +1. Install the `vault-radar` CLI. + + ```shell-session + $ sudo yum update && sudo yum install vault-radar -y + ``` + +1. Verify the installation. + + ```shell-session + $ vault-radar --help + + Usage: vault-radar [--version] [--help] [] + + Available commands are: + govern Govern commands + index Index commands + meter Meter commands + scan Scan commands + station Station management + version Shows the vault-radar cli version and golang version + ``` + + + + +The installation steps provided were tested on Fedora 39 using the dnf package +manager. + +1. Update the dnf repository. + + ```shell-session + $ sudo dnf update + ``` + +1. Add the HashiCorp repository. + + + + Refer to the [official + packaging](https://www.hashicorp.com/official-packaging-guide) guide for the + correct RHEL/CentOS or Fedora configuration. + + + + ```shell-session + $ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo + ``` + +1. Install the `vault-radar` CLI. + + ```shell-session + $ sudo dnf update && sudo dnf -y install vault-radar + ``` + +1. Verify the installation. + + ```shell-session + $ vault-radar --help + + Usage: vault-radar [--version] [--help] [] + + Available commands are: + govern Govern commands + index Index commands + meter Meter commands + scan Scan commands + station Station management + version Shows the vault-radar cli version and golang version + ``` + + + + +The installation steps provided were tested on Amazon Linux 2 and should work on +other distributions that utilize the yum package manager. + +1. Update the yum repository. + + ```shell-session + $ sudo yum update + ``` + +1. Add the HashiCorp repository. 
+ + + Refer to the [official + packaging](https://www.hashicorp.com/official-packaging-guide) guide for the + correct RHEL/CentOS or Fedora configuration. + + + + ```shell-session + $ curl -fsSL https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo | sudo tee /etc/yum.repos.d/hashicorp.repo + ``` + +1. Install the `vault-radar` CLI. + + ```shell-session + $ sudo yum update && sudo yum install vault-radar -y + ``` + +1. Verify the installation. + + ```shell-session + $ vault-radar --help + + Usage: vault-radar [--version] [--help] [] + + Available commands are: + govern Govern commands + index Index commands + meter Meter commands + scan Scan commands + station Station management + version Shows the vault-radar cli version and golang version + ``` + + + + + + + +HashiCorp does not maintain installation binaries using Chocolatey or Scoop. The +latest version of the HCP Vault Radar CLI is available by manual installation. + +This example downloads the Windows AMD64 binary using PowerShell. Use a Windows +account with appropriate permissions to extract the binary to the `Program Files` +directory and update the `PATH`. + +1. Launch a PowerShell terminal using **Run as Administrator**. If prompted, click + **Yes**. + +1. Verify the latest (or desired) version of the HCP Vault Radar binary from + [releases.hashicorp.com](https://releases.hashicorp.com/vault-radar/). For example, "0.6.0". + +1. Create some environment variables to configure the download: + + ```shell-session + $Env:RadarVersion = "0.6.0" + $Env:RadarArch = $Env:PROCESSOR_ARCHITECTURE.ToLower() + $Env:RadarURL = "https://releases.hashicorp.com/vault-radar/${Env:RadarVersion}" + $Env:RadarFile = "vault-radar_${Env:RadarVersion}_windows_${Env:RadarArch}.zip" + ``` + +1. Download the zip archive to the current working directory. + + ```shell-session + $ Invoke-WebRequest ` + -URI "${Env:RadarURL}/${Env:RadarFile}" ` + -OutFile "./${Env:RadarFile}" + ``` + +1.
Extract the archive file to the default program directory. + + ```shell-session + $ Expand-Archive ` + -Path ".\${Env:RadarFile}" ` + -Destination "${Env:Programfiles}\HashiCorp\radar-cli\bin\" + ``` + +1. Update the Windows user path: + + ```shell-session + $ [System.Environment]::SetEnvironmentVariable("Path", + "$Env:Path;${Env:Programfiles}\HashiCorp\radar-cli\bin", + [System.EnvironmentVariableTarget]::User + ) + ``` + +1. Close the session and re-open PowerShell normally. + +1. Verify the installation. + + ```shell-session + $ vault-radar --help + + Usage: vault-radar [--version] [--help] [] + + Available commands are: + govern Govern commands + index Index commands + meter Meter commands + scan Scan commands + station Station management + version Shows the vault-radar cli version and golang version + ``` + + + + +The HCP Vault Radar CLI is also available as a Docker image. This image +can be used to run the CLI in a containerized environment. + +1. Verify the latest (or desired) version of the HCP Vault Radar binary from + [Docker Hub](https://hub.docker.com/r/hashicorp/vault-radar/tags). For example, "0.6.0". + +1. Set the version as an environment variable. + + ```shell-session + $ export VAULT_RADAR_VERSION=0.6.0 + ``` + +1. Pull the Docker image. + + ```shell-session + $ docker pull hashicorp/vault-radar:${VAULT_RADAR_VERSION} + ``` + +1. Run the Docker container. + + ```shell-session + $ docker run --rm hashicorp/vault-radar:${VAULT_RADAR_VERSION} vault-radar --help + ``` + + + + +## Dependencies + +The Vault Radar CLI requires access to the following URLs: + +- api.cloud.hashicorp.com +- auth.idp.hashicorp.com + +Configure the necessary rules within your network to ensure the CLI can +access these URLs. + +The following dependencies need to be installed on the machine `vault-radar` is +running on.
+ +- [git](https://git-scm.com/downloads) - Required for the `scan repo` and `scan confluence` commands. +- [Docker engine](https://docs.docker.com/engine/install/) - Required for the + `scan docker-image` command. + +## Usage + + + +```shell-session +Usage: vault-radar [--version] [--help] [] + +Available commands are: + agent Agent management + govern Govern commands + index Index commands + install Install commands + meter Meter commands + scan Scan commands + version Shows the vault-radar cli version and golang version +``` + + + +Some commands require a connection to HCP. You will need to set `HCP_PROJECT_ID`, `HCP_CLIENT_ID`, and `HCP_CLIENT_SECRET` from your HCP project. For more information on generating service principal keys, refer to the [HCP upload page](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp). + +For more information, examples, and usage details for a command, click on the name +of the command in the sidebar. + +## Command help + +To view a list of the available commands at any time, run `vault-radar` +with no arguments: + +```shell-session +$ vault-radar +``` + +Use `help` (or `-h` for shorthand) to see the specific command help output. + +**Example:** See the help message for the `vault-radar scan aws-parameter-store` +command usage. + +```shell-session +$ vault-radar scan aws-parameter-store -h +Usage: vault-radar scan aws-parameter-store [options] + + Scans AWS Parameter Store + +Options: + + --region, -r Specifies the region of AWS Parameter Store to scan (required) + --outfile , -o Specifies the file to store information about found secrets (required) + --skip-history If specified, scans only the most recent version of the parameters. Default is to scan all available versions + --format, -f Specifies the output format, csv, json, and sarif are supported. Defaults to csv + --index-file Specifies the index file path to use in order to determine which risks are managed + --baseline, -b Specifies the file with previous scan results. Only new secrets will be reported.
+ --limit, -l Specifies the maximum number of secrets to be reported. The scan will stop when the limit is reached + --parameter-limit Specifies the maximum number of parameters to be scanned. The scan will stop when the limit is reached + --disable-ui Specifies that the scan summary should not be logged to stdout +``` + +## Global flags and environment variables + +### Disable UI + +The `--disable-ui` flag disables logging the command status and summary to stdout. +This is particularly useful when you want to run a command in an environment where a TTY +is not available, such as a CI/CD pipeline. + +**Example:** Run the `scan aws-parameter-store` command without logging the summary to stdout. + +```shell-session +$ vault-radar scan aws-parameter-store --region us-west-2 --outfile secrets.csv --disable-ui +``` + +### Index file + +All the scan commands support the `--index-file` flag to specify the output file generated by +the `index` command. When this flag is specified, the scan command uses the index file to determine +which secrets are managed (i.e., the secret is also detected in a secrets manager such as HashiCorp Vault). + +**Example:** Run the `scan aws-parameter-store` command with the index file. + +```shell-session +$ vault-radar scan aws-parameter-store --region us-west-2 \ + --outfile secrets.csv --index-file index.jsonl +``` + +See [How to generate a Vault index](/hcp/docs/vault-radar/cli/index/vault#index-generation) for details. + +### Risks limit + +All the scan commands support the `--limit` flag to specify the maximum number of secrets to be reported. +The scan will stop when the limit is reached. + +**Example:** Run the `scan aws-parameter-store` command with the limit. + +```shell-session +$ vault-radar scan aws-parameter-store --region us-west-2 \ + --outfile secrets.csv --limit 10 +``` + +### Log level + +The log level for the CLI can be configured using the `VAULT_RADAR_LOG_LEVEL` environment variable. +The default log level is `info`.
The supported log levels are: + +- `trace` +- `debug` +- `info` +- `warn` +- `error` + +**Example:** Set the log level to `debug`. + +```shell-session +$ export VAULT_RADAR_LOG_LEVEL=debug +``` diff --git a/content/hcp-docs/content/docs/vault-radar/cli/index/aws-parameter-store.mdx b/content/hcp-docs/content/docs/vault-radar/cli/index/aws-parameter-store.mdx new file mode 100644 index 0000000000..7ab22de8c1 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/index/aws-parameter-store.mdx @@ -0,0 +1,96 @@ +--- +page_title: index aws-parameter-store command +description: |- + The index aws-parameter-store command is used for indexing parameters of type SecureString in AWS Parameter Store. +--- + +# index aws-parameter-store + +@include 'beta-feature.mdx' + +@include 'vault-radar/version-requirement.mdx' + +The `index aws-parameter-store` command is used for creating an index of secure strings in AWS Parameter Store. + + + +Only parameters of type `SecureString` are indexed as they are secure by definition. + + + +## Authentication + +The `index aws-parameter-store` command needs permissions to read the parameter, +its history, and its tags. See the following simplified policy document: + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "ssm:DescribeParameters", + "ssm:GetParameterHistory", + "ssm:GetParameter", + "ssm:GetParameters", + "ssm:ListTagsForResource" + ], + "Resource": "*" + } + ] +} +``` + +See [AWS Authentication](/hcp/docs/vault-radar/cli/configuration/aws-authentication) for +more information on how to authenticate with AWS.
+ +## Usage + + + +```plaintext +Usage: vault-radar index aws-parameter-store [options] +``` + + + +### Command options + +- `--region, -r`: Specifies the region of AWS Parameter Store to scan (required) +- `--outfile , -o`: Specifies the file to store information about found secrets (required) +- `--disable-ui`: Specifies that the scan summary should not be logged to stdout + +### HCP connection indexing behavior + +Index commands require an HCP cloud connection to scan. This ensures that hashes +are generated using a shared salt from the cloud, keeping them consistent across indexes. +To populate the required HCP connection information, refer to the +[HCP upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page. + +### Generate an index file + +Index files are generated in an "online mode", meaning that the +secret hash is produced using a salt provided by HCP. This requires +the Project Service Principals to be configured for your system as outlined by +the [HCP upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) +page. To generate an index file using the `SecureString` parameters: + +```shell-session +$ vault-radar index aws-parameter-store \ + -r \ + -o .jsonl +``` + +### Consuming an index file + +To consume the resulting index file, use the `--index-file` flag when calling a +scan command. For example: + +```shell-session +$ vault-radar scan aws-s3 \ + --bucket \ + -r \ + -o .csv \ + --index-file +``` diff --git a/content/hcp-docs/content/docs/vault-radar/cli/index/vault.mdx b/content/hcp-docs/content/docs/vault-radar/cli/index/vault.mdx new file mode 100644 index 0000000000..456318e3b6 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/index/vault.mdx @@ -0,0 +1,108 @@ +--- +page_title: index vault command +description: |- + The index vault command is used for creating an index of secrets from a HashiCorp Vault Community Edition or Enterprise cluster.
+--- + +# index vault + +@include 'beta-feature.mdx' + +@include 'vault-radar/version-requirement.mdx' + +The `index vault` command is used for creating an index of secrets from Vault. + +## Authentication and authorization + +`vault-radar` requires the `VAULT_ADDR` and `VAULT_TOKEN` environment variables +to connect to your Vault cluster. The `index vault` command +traverses the full namespace hierarchy. Within each namespace, it attempts to scan +every AppRole and KVv2 mount. Access can be limited via the +policies attached to the provided token. `vault-radar` attempts to use the +following endpoints: + +| Mount | Resource | Method | Endpoint | +|---------|----------------------------|--------|------------| +| System | List namespaces | `LIST` | [sys/namespaces](/vault/api-docs/system/namespaces#list-namespaces) | +| System | List auth methods | `GET` | [sys/auth](/vault/api-docs/system/auth#list-auth-methods) | +| System | List secret mounts | `GET` | [sys/mounts](/vault/api-docs/system/mounts#list-mounted-secrets-engines) | +| AppRole | List roles | `LIST` | [auth/:mount-path/role](/vault/api-docs/auth/approle#list-roles) | +| AppRole | List secret ID accessors | `LIST` | [auth/:mount-path/role/:role_name/secret-id](/vault/api-docs/auth/approle#list-secret-id-accessors) | +| AppRole | Lookup secret ID accessors | `POST` | [auth/:mount-path/role/:role_name/secret-id-accessor/lookup](/vault/api-docs/auth/approle#read-approle-secret-id-accessor) | +| KVv2 | Read KV engine config | `GET` | [:mount-path/config](/vault/api-docs/secret/kv/kv-v2#read-kv-engine-configuration) | +| KVv2 | List secrets | `LIST` | [:mount-path/metadata/:path](/vault/api-docs/secret/kv/kv-v2#list-secrets) | +| KVv2 | Read secret metadata | `GET` | [/:mount-path/metadata/:path](/vault/api-docs/secret/kv/kv-v2#read-secret-metadata) | +| KVv2 | Read secret version | `GET` |
[/:mount-path/data/:path?version=:version-number](/vault/api-docs/secret/kv/kv-v2#read-secret-version) | + + +## Index generation + +You can generate an index of KVv2 secrets from Vault using the `index vault` command: + +```shell-session +$ vault-radar index vault --outfile +``` + +The index output will be JSONL-formatted. There will be an entry per secret +sub-key, per version, per KVv2 secret. For example, there will be 6 entries for +a secret with 3 versions that contains an AWS access key ID and secret key. + +Each index entry will contain the following fields: + +- `value_hash` - The hashed version of the secret value +- `secret_key` - The underlying sub-key within the Vault secret (e.g. + `aws_secret_key`) +- `secret_type` - The type of secret determined by its key and/or underlying + value (e.g. GitHub personal access token, AWS secret key) +- `secret_age_days` - The time elapsed in days since creation +- `location` - A full URL that can be used to retrieve the secret (e.g. + `vault://127.0.0.1:8205/v1/eng/team1/app-foo/data/aws?version=1`) + +The index can then be used to compare against in other scans. For example, the +following command can be used to run a Confluence scan using a generated Vault +index file: + +```shell-session +$ vault-radar scan confluence --outfile=confluence.csv \ + --url="http://localhost:8090" \ + --space-key=VRD --index-file=vault.idx +``` + +An in-memory index keyed off of secret hashes will be generated prior to +scanning the source. This index will be used to annotate whether a risk exists +in Vault. + +### Generate an index file using an HCP-provided salt + +Index files are generated in an "online mode", meaning that the +secret hash is produced using a salt provided by HCP. This requires +the Project Service Principals to be configured for your system as outlined by +the [HCP upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) +page.
+ +```shell-session +$ vault-radar index vault -r -o .jsonl +``` + +### HCP Vault Dedicated considerations + +In an HCP Vault Dedicated cluster, the root namespace is restricted and users only +have access to the `admin` namespace and all the child namespaces within it. For +`vault-radar index vault` to work against an HCP Vault Dedicated cluster, you must set the +`VAULT_PARENT_NAMESPACE` environment variable to the namespace that needs to be +scanned. + +```shell-session +$ export VAULT_PARENT_NAMESPACE=admin +$ vault-radar index vault --outfile index-results.jsonl +``` + +The above command scans all the namespaces within `admin`, including the +`admin` namespace itself. + + + +The `VAULT_PARENT_NAMESPACE` variable also works on Vault Enterprise, but it is not +mandatory to set it. + + diff --git a/content/hcp-docs/content/docs/vault-radar/cli/install/git.mdx b/content/hcp-docs/content/docs/vault-radar/cli/install/git.mdx new file mode 100644 index 0000000000..06e4303e3c --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/install/git.mdx @@ -0,0 +1,65 @@ +--- +page_title: install git overview +description: |- + An overview of vault-radar install git sub-commands. +--- + +# install git pre-commit-hook + +The `install git pre-commit-hook` command is a simple way to set up [a Git pre-commit hook](https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks) that runs a `vault-radar` scan on any commit. + +To manually install and invoke `vault-radar` as a pre-commit step, add the following command to an existing pre-commit script; `vault-radar` then performs a scan prior to any commit. + + + +```plaintext +vault-radar scan git pre-commit +``` + + +Note: The suggested approach is to use the installation command, which adds the command to the existing pre-commit script while leaving the existing configuration untouched. + +## Authentication +This command requires a valid `vault-radar` license.
[How to configure a license](/hcp/docs/vault-radar/cli/index#Offline-mode). Please reach out to your customer support contact for help generating a license.

## Usage
Run the following from within the repository you want the pre-commit hook installed on.



```plaintext
vault-radar install git pre-commit-hook
```


When you make your next commit, the newly installed pre-commit hook runs a scan of the diff. If the scan detects risks with a severity at or above the configured threshold, the commit is rejected.


### Remediation Options
Here are some options to handle identified risks that should be allowed and are preventing a developer from performing a commit.

* [Custom Ignore Rules](/hcp/docs/vault-radar/cli/configuration/write-custom-ignore-rules)
* [Inline Ignore Rules](/hcp/docs/vault-radar/concepts/write-inline-ignore-rules)

## Configuration
The scan that happens during the pre-commit hook looks for configuration in one of two places:
1. The root of a repository managed by git: `./.hashicorp/vault-radar/config.json`
2. Or in your user `HOME` directory, to apply the configuration globally: `~/.hashicorp/vault-radar/config.json`

Note: The local version of the configuration takes precedence over the global version if both are defined.

### Sample `config.json`



```json
{
  "fail_severity": "high"
}
```



* `fail_severity` - Defines a fail threshold for vault-radar. When a risk is identified with a severity at or beyond the configured fail-severity, `vault-radar` considers the scan a failure. See [severity](/hcp/docs/vault-radar/concepts/severity) for more information on the different levels.

**Note: If this configuration value is not defined, the default behavior is to not enforce any severity.
As a result, all risks identified will be allowed.**
\ No newline at end of file
diff --git a/content/hcp-docs/content/docs/vault-radar/cli/meter/confluence.mdx b/content/hcp-docs/content/docs/vault-radar/cli/meter/confluence.mdx
new file mode 100644
index 0000000000..4c5ef1f2f0
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/meter/confluence.mdx
@@ -0,0 +1,59 @@
---
page_title: meter confluence command
description: |-
  The meter confluence command is used for evaluating the costs associated with scanning a self-managed Confluence instance (Confluence Server or Confluence Data Center) or Atlassian Confluence Cloud instance.
---

# meter confluence
@include 'vault-radar/version-requirement.mdx'

The `meter confluence` command provides information on the number of contributors in your data source. Your account teams use this information to determine license costs.

## Authentication

The `meter confluence` command uses the same [authentication](/hcp/docs/vault-radar/cli/scan/confluence#authentication) as the `scan confluence` command.
Keep in mind that the credentials must have access to all the spaces that should be evaluated.

## Usage

### users

`meter confluence users` provides a usage estimate based on the number of users detected while performing a scan of a Confluence data source.

### Command Options

- `--url, -u`: The URL endpoint of the Confluence server to meter (required).
- `--outfile, -o`: Specifies the file to store information about found users (required).

Either the `--days` parameter or both `--start-time` and `--end-time` are required.
- `--days, -d`: Specifies the number of days to evaluate for metering.
- `--start-time`: Specifies the start date to evaluate metering. Setting `--end-time` is also required. Accepts `YYYY-MM-DD`.
- `--end-time`: Specifies the end date to evaluate metering. Setting `--start-time` is also required. Accepts `YYYY-MM-DD`.


- `--spaces`: Comma-separated list of space keys to evaluate.
If the parameter is not present, all spaces will be evaluated.
- `--skip-personal`: Adding this flag instructs the command to not evaluate Confluence Personal spaces that are part of the Organization.

### Meter Confluence for the last 10 days

```shell-session
$ vault-radar meter confluence users -u -o .csv --days 10
```

### Meter a Confluence Organization without personal spaces

```shell-session
$ vault-radar meter confluence users -u -o .csv --days 10 --skip-personal
```

### Meter specific Confluence spaces

```shell-session
$ vault-radar meter confluence users -u -o .csv --days 10 --spaces ,
```

### Meter Confluence from one date until another

```shell-session
$ vault-radar meter confluence users -u -o .csv --start-time --end-time
```
diff --git a/content/hcp-docs/content/docs/vault-radar/cli/meter/git.mdx b/content/hcp-docs/content/docs/vault-radar/cli/meter/git.mdx
new file mode 100644
index 0000000000..8925065b82
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/meter/git.mdx
@@ -0,0 +1,62 @@
---
page_title: meter git command
description: |-
  The meter git command is used for evaluating the costs associated with scanning a git data source like GitHub.
---

# meter git

The `meter git` command provides information on the number of contributors in your data source. Your account teams use this information to determine license costs.

## Authentication
The `meter git` command uses the same [authentication](/hcp/docs/vault-radar/cli/scan/repo#authentication) as the `scan repo` command.
Keep in mind that the credentials used must have access to the resources that are to be evaluated.

Additionally, if you do not specify which orgs to scan by using the `--orgs` flag, the credentials used must have permissions to enumerate orgs.

## Usage
### users
`meter git users` provides a usage estimate based on the number of users detected while performing a scan of a Git data source.
+ +### Command Options + +- `--type, -t`: The type of the git data source (required). Supported values: 'github_cloud', 'github_enterprise', 'gitlab_cloud', 'gitlab_onprem','bitbucket_cloud', 'bitbucket_server'. +- `--url, -u`: The url endpoint of the git data source to meter (required for non-cloud data sources) +- `--outfile, -o`: Specifies the file to store information about found users (required). + +The `days` parameter or both `start-time` and `end-time` are required. +- `--days, -d`: Specifies the number of days to evaluate for metering. +- `--start-time`: Specifies the start date to evaluate metering. Setting `--end-time` is also required. Accepts `YYYY-MM-DD`. +- `--end-time`: Specifies the end date to evaluate metering. Setting `--start-time` is also required. Accepts `YYYY-MM-DD`. + +- `--orgs`: A comma separated list of orgs used to collect metering info. If not specified, the command will try to meter all orgs the credentials have access to. As a result if this flag is not used, the credentials must have permissions to enumerate orgs. 
### Meter GitHub Cloud Orgs

```shell-session
$ vault-radar meter git users -t 'github_cloud' -o .csv
```

### Meter GitHub Server Orgs

```shell-session
$ vault-radar meter git users -t 'github_enterprise' -u -o .csv
```

### Meter specific GitHub Orgs

```shell-session
$ vault-radar meter git users -t 'github_cloud' -o .csv --orgs ,
```

### Meter GitHub for the last 10 days

```shell-session
$ vault-radar meter git users -t 'github_cloud' -o .csv --days 10
```

### Meter GitHub from one date until another

```shell-session
$ vault-radar meter git users -t 'github_cloud' -o .csv --start-time --end-time
```
diff --git a/content/hcp-docs/content/docs/vault-radar/cli/meter/jira.mdx b/content/hcp-docs/content/docs/vault-radar/cli/meter/jira.mdx
new file mode 100644
index 0000000000..f7b20195a1
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/meter/jira.mdx
@@ -0,0 +1,57 @@
---
page_title: meter jira command
description: |-
  The meter jira command is used for evaluating the costs associated with scanning an Atlassian Jira Cloud instance or a self-managed Jira Server.
---

# meter jira


You must have version 0.11.0 or higher of the Vault Radar CLI installed.

To check the current version of your CLI, use the [version](/hcp/docs/vault-radar/cli/version) command.



The `meter jira` command provides information on the number of contributors in your data source. Your account teams use this information to determine license costs.

## Authentication

The `meter jira` command uses the same [authentication](/hcp/docs/vault-radar/cli/scan/jira#authentication) as the `scan jira` command.
Keep in mind that the credentials must have access to all the projects that should be evaluated.

## Usage

### users

`meter jira users` provides a usage estimate based on the number of users detected while performing a scan of a Jira data source.
### Command Options

- `--url, -u`: The URL endpoint of the Jira instance to meter (required).
- `--outfile, -o`: Specifies the file to store information about found users (required).
- `--projects`: Specifies a comma-separated list of project keys used to collect metering info. If not set, all projects are metered.

Either the `--days` parameter or both `--start-time` and `--end-time` are required.

- `--days, -d`: Specifies the number of days to evaluate for metering.
- `--start-time`: Specifies the start date to evaluate metering. Setting `--end-time` is also required. Accepts `YYYY-MM-DD`.
- `--end-time`: Specifies the end date to evaluate metering. Setting `--start-time` is also required. Accepts `YYYY-MM-DD`.

### Meter Jira for the last 10 days

```shell-session
$ vault-radar meter jira users -u -o .csv --days 10
```

### Meter specific projects in Jira

```shell-session
$ vault-radar meter jira users -u -o .csv --days 10 --projects ,
```

### Meter Jira projects from one date until another

```shell-session
$ vault-radar meter jira users -u -o .csv --start-time --end-time
```
diff --git a/content/hcp-docs/content/docs/vault-radar/cli/meter/slack.mdx b/content/hcp-docs/content/docs/vault-radar/cli/meter/slack.mdx
new file mode 100644
index 0000000000..3a9a23f910
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/meter/slack.mdx
@@ -0,0 +1,56 @@
---
page_title: meter slack command
description: |-
  The meter slack command is used for evaluating the costs associated with scanning a Slack workspace or channel(s).
---

# meter slack


You must have version 0.28.0 or higher of the Vault Radar CLI installed.

To check the current version of your CLI, use the [version](/hcp/docs/vault-radar/cli/version) command.



The `meter slack` command provides information on the number of contributors in your data source. Your account teams use this information to determine license costs.
## Authentication

The `meter slack` command uses the same [authentication](/hcp/docs/vault-radar/cli/scan/slack#authentication) as the `scan slack` command.
Keep in mind that the credentials must have access to all the channels that should be evaluated.

## Usage

### users

`meter slack users` provides a usage estimate based on the number of users detected while performing a scan of a Slack workspace or channel(s).

### Command Options

- `--outfile, -o`: Specifies the file to store information about found users (required).
- `--channels, -c`: Specifies the comma-separated list of channels to collect metering from.

Either the `--days` parameter or both `--start-time` and `--end-time` are required.

- `--days, -d`: Specifies the number of days to evaluate for metering.
- `--start-time`: Specifies the start date to evaluate metering. Setting `--end-time` is also required. Accepts `YYYY-MM-DD`.
- `--end-time`: Specifies the end date to evaluate metering. Setting `--start-time` is also required. Accepts `YYYY-MM-DD`.

### Meter Slack for the last 10 days

```shell-session
$ vault-radar meter slack users -o .csv --days 10
```

### Meter specific channels in Slack

```shell-session
$ vault-radar meter slack users -o .csv --days 10 --channels ,
```

### Meter Slack workspace from one date until another

```shell-session
$ vault-radar meter slack users -o .csv --start-time --end-time
```
diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/aws-parameter-store.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/aws-parameter-store.mdx
new file mode 100644
index 0000000000..7b13f31b54
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/aws-parameter-store.mdx
@@ -0,0 +1,159 @@
---
page_title: scan aws-parameter-store command
description: |-
  The scan aws-parameter-store command is used for scanning parameters of type String and StringList in AWS Parameter Store.
+--- + +# scan aws-parameter-store + +@include 'beta-feature.mdx' + +@include 'vault-radar/version-requirement.mdx' + +The `scan aws-parameter-store` command is used for scanning parameters of type `String` and +`StringList` AWS Parameter Store. + + + +Parameters of type `SecureString` will not be scanned as they are secure by definition. + + + +## Authentication + +The `scan aws-parameter-store` command needs permissions to read the parameter, +its history and tags, see the following simplified policy document. + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "ssm:DescribeParameters", + "ssm:GetParameterHistory", + "ssm:GetParameter", + "ssm:GetParameters", + "ssm:ListTagsForResource" + ], + "Resource": "*" + } + ] +} +``` + +See [AWS Authentication](/hcp/docs/vault-radar/cli/configuration/aws-authentication) for +more information on how to authenticate with AWS. + +## Usage + + + +```plaintext +Usage: vault-radar scan aws-parameter-store [options] +``` + + + +### Command options + +- `--region, -r`: Specifies the region of AWS Parameter Store to scan (required) +- `--outfile , -o`: Specifies the file to store information about found secrets (required) +- `--skip-history`: If specified, scans only the most recent version of the parameters. Default is to scan all available versions +- `--format, -f`: Specifies the output format, csv and json are supported. Defaults to csv +- `--index-file`: Specifies the index file path to use in order to determine which risks are Vaulted +- `--baseline, -b`: Specifies the file with previous scan results. Only new secrets will be reported. +- `--limit, -l`: Specifies the maximum number of secrets to be reported. The scan will stop when the limit is reached +- `--parameter-limit`: Specifies the maximum number of parameters to be scanned. 
The scan will stop when the limit is reached +- `--disable-ui`: Specifies that the scan summary should not be logged to stdout +- `--skip-activeness`: If specified, skips activeness checks + +### Scan latest version of parameters + +To scan latest version of all parameters within a region and write the results +to a CSV file (default format). + +```shell-session +$ vault-radar scan aws-parameter-store -r \ + -o .csv \ + --skip-history +``` + +### Scan latest version of parameters and output in JSON + +To scan latest version of all parameters within a region and write the results +in [JSON Lines](https://jsonlines.org/) format + +```shell-session +$ vault-radar scan aws-parameter-store -r \ + -o .jsonl \ + --skip-history \ + -f json +``` + +### Scan all versions of parameters + +To scan all the available versions of all parameters within a region + +```shell-session +$ vault-radar scan aws-parameter-store -r \ + -o .csv +``` + +### Scanning using a baseline file + +Perform a scan using a previous scan's result and write the new changes to an +outfile. With `-b` option, only new risks, risks that were not found in the +previous scan will be reported. + +```shell-session +$ vault-radar scan aws-parameter-store -r \ + -b .csv \ + -o .csv +``` + +Note: it is expected that previous and current scans are "similar", +e.g. both either latest version or history scans and same output format + +### Scanning using a Vault index file + +Perform a scan using a generated vault index and write the results to an +outfile. In this mode, if a risk was previously found in Vault, the scan results +will report the location in Vault as well. 
+ +```shell-session +$ vault-radar scan aws-parameter-store -r \ + -o .csv \ + --index-file .jsonl +``` + +[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation) + +### HCP connection scanning behavior + +The scan commands require an HCP cloud connection to ensure +that hashes are generated using a shared salt from the +cloud keeping consistency across scans. In order to populate the HCP connection +information needed, refer to the [HCP +upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page. + +### Scan and restrict the number of secrets found + +To stop scanning when the defined number of secrets are found + +```shell-session +$ vault-radar scan aws-parameter-store -r \ + -o .csv \ + -l +``` + +### Scan and restrict the number of parameters scanned + +To stop scanning when the defined number of parameters are scanned. + +```shell-session +$ vault-radar scan aws-parameter-store -r \ + -o .csv \ + --parameter-limit +``` diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/aws-s3.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/aws-s3.mdx new file mode 100644 index 0000000000..e6462d3e5e --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/aws-s3.mdx @@ -0,0 +1,139 @@ +--- +page_title: scan aws-s3 command +description: |- + The scan aws-s3 command is used for scanning an Amazon S3 bucket. +--- + +# scan aws-s3 + +@include 'beta-feature.mdx' + +@include 'vault-radar/version-requirement.mdx' + +The `scan aws-s3` command is used for scanning an Amazon S3 bucket. + +## Authentication + +The `scan aws-s3` comand needs permissions to list and get the objects in the +bucket, see this simplified policy document below. + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "s3:ListBucket", + "s3:GetObject" + ], + "Resource": "*" + } + ] +} +``` + +The `Resource` in the policy can be limited to the buckets that one wants to +scan. 
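The policy can, for instance, be scoped to a single bucket. The sketch below assumes a hypothetical bucket named `example-scan-bucket`; note that `s3:ListBucket` applies to the bucket ARN itself, while `s3:GetObject` applies to the objects under it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::example-scan-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-scan-bucket/*"
    }
  ]
}
```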
+ +See [AWS Authentication](/hcp/docs/vault-radar/cli/configuration/aws-authentication) for +more information on how to authenticate with AWS. + +## Usage + + + +```plaintext +Usage: vault-radar scan aws-s3 [options] +``` + + + +### Scan all objects in a bucket + +To scan all objects within a bucket and write the results to a CSV file (default +format). + +```shell-session +$ vault-radar scan aws-s3 --bucket -r \ + -o .csv +``` + +### Scan all objects in a bucket and output in JSON + +To scan all objects within a bucket and write the results in [JSON +Lines](https://jsonlines.org/) format. + +```shell-session +$ vault-radar scan aws-s3 --bucket -r \ + -o .jsonl \ + -f json +``` + +### Scan all objects in a bucket with prefix + +To scan all objects within a bucket beginning with a prefix. + +```shell-session +$ vault-radar scan aws-s3 --bucket -r \ + --prefix \ + -o .jsonl \ + -f json +``` + +### Scanning using a baseline file + +Perform a scan using a previous scan's result and write the new changes to an +outfile. With `-b` option, only new risks, risks that were not found in the +previous scan will be reported. + +```shell-session +$ vault-radar scan aws-s3 --bucket -r \ + -b .csv \ + -o .csv +``` + +Note: it is expected that previous and current scans are "similar", +e.g. both either latest version or history scans and same output format + +### Scanning using a Vault index file + +Perform a scan using a generated vault index and write the results to an +outfile. In this mode, if a risk was previously found in Vault, the scan results +will report the location in Vault as well. + +```shell-session +$ vault-radar scan aws-s3 --bucket -r \ + -o .csv \ + --index-file .jsonl +``` + +[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation) + +### HCP connection scanning behavior + +The scan commands require an HCP cloud connection to ensure +that hashes are generated using a shared salt from the +cloud keeping consistency across scans. 
In order to populate the HCP connection +information needed, refer to the [HCP +upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page. + +### Scan and restrict the number of secrets found + +To stop scanning when the defined number of secrets are found. + +```shell-session +$ vault-radar scan aws-s3 --bucket -r \ + -o .csv \ + -l +``` + +### Scan and restrict the number of objects scanned + +To stop scanning when the defined number of objects are scanned. + +```shell-session +$ vault-radar scan aws-s3 --bucket -r \ + -o .csv \ + --object-limit +``` diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/overview.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/overview.mdx new file mode 100644 index 0000000000..6e9c8456a6 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/overview.mdx @@ -0,0 +1,53 @@ +--- +page_title: scan ci command overview +description: |- + The scan ci command is used to enable scanning content in a continuous integration workflow. +--- + +# scan ci + +The `scan ci` command is used to enable scanning content in a continuous integration workflow. + + + +## Usage + + + +```plaintext +Usage: vault-radar scan ci [subcommand] +``` + + + +### Command Options + +- `pr`: Scans a git repository branch/pr for a CI/CD workflow +- `tip`: Scans the tip of a git repository branch for a CI/CD workflow + + +### Example Vault Radar CI configurations + +Your `HCP_PROJECT_ID`, `HCP_CLIENT_ID`, and `HCP_CLIENT_SECRET` from your project are needed to use the `vault-radar scan ci` commands. These values will need to be available to the workflow runner as environment variables. 
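As a defensive sketch, a workflow step can fail fast when any of these credentials is missing from the runner's environment (the variable names come from the paragraph above):

```shell-session
$ for v in HCP_PROJECT_ID HCP_CLIENT_ID HCP_CLIENT_SECRET; do
>   [ -n "$(printenv "$v")" ] || { echo "missing $v"; exit 1; }
> done
```

On hosted CI systems, prefer the platform's secret store over plain-text exports when supplying these values.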
+ + + + + +@include 'vault-radar/cicd/github-cicd-example.mdx' + + + + + +@include 'vault-radar/cicd/gitlab-cicd-example.mdx' + + + + + +@include 'vault-radar/cicd/bitbucket-cicd-example.mdx' + + + + diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/pr.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/pr.mdx new file mode 100644 index 0000000000..5aa4739948 --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/pr.mdx @@ -0,0 +1,82 @@ +--- +page_title: scan ci command overview +description: |- + The scan ci pr command is used for scanning pull request or branch changes in a continuous integration workflow. +--- + +# scan ci pr + +The `scan ci pr` command is used for scanning pull request or branch changes in a continuous integration workflow. + +## Authentication + +The command is intended to be used offline as part of a CI workflow within an application (such as GitHub). There should not be any additional Auth needed. + + +## Usage + + + +```plaintext +Usage: vault-radar scan ci pr [options] +``` + + +### Command Options + +- `--clone-dir, -c`: Define a path to a clone of the repository. If not defined, the current directory is used. +- `--head-ref, -r`: Define the head ref or source branch of the PR (required) +- `--base-ref, -b`: Define the base ref or target branch of the PR (required) +- `--ref-name, -n`: Define the source branch name of the PR +- `--outfile, -o`: Define the location to a file where information about found secrets will be stored. +- `--format, -f`: Define the output format. Supported values: `csv`, `json`, and `sarif`. `json` is the default if this is option is not defined. +- `--fail-severity, -s`: Define a severity level that will cause the command to fail if any risks are found with a severity level equal to or higher than defined one. +- `--fail-not-latest`: When toggled, this will cause the command to fail when a found risk is not part of the latest version. 
- `--log-path, -l`: Define a path to a file where logging will be written.
- `--skip-ignored`: Enables skipping risks with the ignore tag.
- `--skip-not-latest`: Toggles skipping evaluation of earlier versions of a risk.
- `--pretty, -p`: Define how to output information about found risks.
- `--summary-pretty`: Define how to output a summary of all found risks. Supported values are: `markdown`. Defaults to skipping the summary output.
- `--summary-outfile`: Define a file to output the summary to. Defaults to stdout.

### Simple CI Scan



```
vault-radar scan ci pr --head-ref HEAD_REF --base-ref BASE_REF --ref-name REFERENCE_NAME
```



### CI Scan That Fails When A High Severity Risk Is Found



```
vault-radar scan ci pr --head-ref HEAD_REF --base-ref BASE_REF --ref-name REFERENCE_NAME -s high
```



### CI Scan That Skips Ignored Errors



```
vault-radar scan ci pr --head-ref HEAD_REF --base-ref BASE_REF --ref-name REFERENCE_NAME -s high --skip-ignored
```



### CI Scan That Outputs Information In GHA Format



```
vault-radar scan ci pr --head-ref HEAD_REF --base-ref BASE_REF --ref-name REFERENCE_NAME --pretty=gha_pr
```



diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/tip.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/tip.mdx
new file mode 100644
index 0000000000..e674d97fa6
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/ci/tip.mdx
@@ -0,0 +1,46 @@
---
page_title: scan ci tip command
description: |-
  The scan ci tip command is used for scanning the tip of a branch in a continuous integration workflow.
---

# scan ci tip

The `scan ci tip` command is used for scanning the tip of a branch in a continuous integration workflow.

## Authentication

The command is intended to be used offline as part of a CI workflow within an application (such as GitHub). There should not be any additional authentication needed.
+ +## Usage + + + +```plaintext +Usage: vault-radar scan ci tip [options] +``` + + + +### Command Options + +- `--clone-dir, -c`: Define a path to a clone of the repository. If not defined, the current directory is used. +- `--outfile, -o`: Define the location to a file where information about found secrets will be stored. +- `--format, -f`: Define the output format. Supported values: `csv`, `json`, and `sarif`. `json` is the default if this is option is not defined. +- `--fail-severity, -s`: Define the severity level of found risks that will cause the command to fail. Supported values: `info`, `low`, `medium`, `high`, and `critical`. +- `--log-path, -l`: Define the path to a file where logging will be output to. +- `--skip-ignored`: Specifies that risks with the ignore tag should be skipped. +- `--pretty, -p`: Define how to output information about found risk. +- `--summary-pretty`: Define how to output summary about all found risks. Supported values are: `markdown`. Defaults to skipping the summary output. +- `--summary-outfile`: Define the file to output the summary to. Defaults to stdout. + +### Tip of Branch Scan +This scan will fail when a risk of `high` severity is found, output information about found risks to a file `vault-radar.jsonl`, log information to a file `vault-radar.log` and output the results to stdout in a format for GitHub Actions. + + + +```plaintext +vault-radar scan ci tip -s high -o vault-radar.jsonl -l vault-radar.log --pretty=gha +``` + + diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/confluence.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/confluence.mdx new file mode 100644 index 0000000000..596364f79e --- /dev/null +++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/confluence.mdx @@ -0,0 +1,191 @@ +--- +page_title: scan confluence command +description: |- + The scan confluence command is used for scanning a Confluence Data Server or Atlassian Confluence Cloud instance. 
+--- + +# scan confluence + +@include 'vault-radar/version-requirement.mdx' + +The `scan confluence` command is used for scanning a Confluence Data Server or +Atlassian Confluence Cloud instance. + +## Authentication + +`vault-radar` needs some authentication credentials in order to be able to make +requests to the Confluence instance. The information needed depends on whether +you are using Confluence Cloud or Server (self hosted). + +### Confluence Cloud + +This means your instance is hosted by Atlassian, and your instance URL should +have ".atlassian.net" in it. + +For cloud, there's only one supported pattern and it requires an [Atlassian API +Token](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/) +and the email of the account that the token belongs to. + +In order to provide the information to `vault-radar`, assign the appropriate +values to both of these environment variables: + +1. `ATLASSIAN_API_TOKEN` +1. `ATLASSIAN_ACCOUNT_EMAIL` + +### Confluence Server + +For self hosted versions of Confluence, there are up to 2 different patterns +possible. + +Versions 7.9 and higher support [creating a Personal Access Token for a +user](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html). +The token will have all the same access rights as the user who creates it. To +use the token set the following environment variable to the generated token: +`CONFLUENCE_PERSONAL_ACCESS_TOKEN` + +Using a personal access token is more secure and should be the preferred access +pattern. A personal access token is easier to revoke and regenerate, and +generally has a smaller blast radius than a password. + +All versions of Confluence server supports authorization using the Username (not +the email), and Password. To authenticate using these credentials set both of +these environment variables: + +1. `CONFLUENCE_USERNAME` +1. 
`CONFLUENCE_PASSWORD`

## Usage



```plaintext
Usage: vault-radar scan confluence [options]
```



### Command options

- `--url, -u`: The URL endpoint of the Confluence server to scan (required)
- `--page-id, -p`: Specifies the Confluence page to scan
- `--space-key, -s`: Specifies the Confluence space to scan
- `--outfile, -o`: Specifies the file to store information about found secrets (required for offline mode only)
- `--format, -f`: Specifies the output format; `csv` and `json` are supported. Defaults to `csv`
- `--baseline, -b`: Specifies the file with previous scan results. Only new secrets will be reported.
- `--limit, -l`: Specifies the maximum number of secrets to be reported. The scan will stop when the limit is reached
- `--page-limit`: Specifies the maximum number of Confluence pages to scan
- `--index-file`: Specifies the index file path to use in order to determine which risks are Vaulted
- `--disable-ui`: Specifies that the scan summary should not be logged to stdout
- `--skip-activeness`: If specified, skips activeness checks

The following examples all assume you have already set the appropriate
environment variables or that you intend to include them as part of the command
you run.

### Scanning a space

Scan a space and upload results to HCP.

```shell-session
$ vault-radar scan confluence -u -s
```

### Scanning a page

Scan a page and write the results to an outfile in CSV format; this is the
default format for output.

```shell-session
$ vault-radar scan confluence -u \
  -p \
  -o .csv
```

### Scanning a page and output JSON

Scan a page and write the results to an outfile in JSON format.

```shell-session
$ vault-radar scan confluence -u -p \
  -o .json \
  -f json
```

### Scanning using a baseline file

Perform a scan using a previous scan's result and write the new changes to an
outfile.
+ +```shell-session +$ vault-radar scan confluence -u -s \ + -b \ + -o .csv +``` + +### HCP connection scanning behavior + +The scan commands require an HCP cloud connection to ensure +that hashes are generated using a shared salt from the +cloud keeping consistency across scans. In order to populate the HCP connection +information needed, refer to the [HCP +upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page. + +### Scanning using a Vault index file + +Perform a scan using a generated vault index and upload results to HCP. + +```shell-session +$ vault-radar scan confluence -u -s \ + --index-file .jsonl +``` +[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation) + +### Scan and restrict the number of pages scanned + +Stop scanning the space after a defined number of pages are scanned. + +```shell-session +$ vault-radar scan confluence -u -s \ + --page-limit +``` + +### Scan and restrict the number of secrets found + +Stop scanning the space when a defined number of secrets are found. + +```shell-session +$ vault-radar scan confluence -u -s \ + -l +``` + +## Troubleshooting help + +### What's the PageID for my page? + +Sometimes you will see the "Pretty" URL which includes the Page Name. If you +want the page's ID, in the right corner there should be an options menu for the +page. It will usually look like 3 dots `...`. Click on that, and then look for +an option like `Page Information` and select that. The URL of the page you land +on, should use the PageID in the URL. + +**Example:** + +``` +http://localhost:8090/pages/viewinfo.action?pageId=123456 +``` + +Where `123456` is this example page's ID. + +### What's the Space Key for my space or page? + +The space key is not always included in the URl of a Page, but it should always +be present when selecting the space you are interested in from the main +Confluence toolbar. 
Additionally, the space's summary details should explicitly list the space key.

**Example:**

```
http://localhost:8090/display/VSID/Some+Page
```

Where `VSID` is the space key.

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/docker-image.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/docker-image.mdx
new file mode 100644
index 0000000000..36a15df7c9
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/docker-image.mdx
@@ -0,0 +1,116 @@
---
page_title: scan docker-image command
description: |-
  The `scan docker-image` command is used for scanning a Docker image.
---

# scan docker-image

@include 'beta-feature.mdx'

@include 'vault-radar/version-requirement.mdx'

The `scan docker-image` command is used for scanning a Docker image.

This command only works with Docker Engine version 24.

## Usage

```plaintext
Usage: vault-radar scan docker-image [options]
```

### Scanning a docker image

Scan a public Docker image (or a private image that is already pulled/downloaded
locally) and write the results to a file in CSV format, the default output
format.

The image reference may optionally include a tag. The latest tag is scanned if
no tag is specified.

[Docker Engine](https://docs.docker.com/engine/install/) is a prerequisite for
scanning Docker images using vault-radar. Docker version 24.x is required.

```shell-session
$ vault-radar scan docker-image -i <IMAGE> -o <PATH>.csv
```

### Scanning a private docker image

To scan a private Docker image, specify the following environment variables to
authenticate against the registry:

1. `DOCKER_REGISTRY_USERNAME`
1. `DOCKER_REGISTRY_PASSWORD`

```shell-session
$ vault-radar scan docker-image -i <IMAGE> -o <PATH>.csv
```

**Example:**

First, set the username and password as environment variables.
```shell-session
$ export DOCKER_REGISTRY_USERNAME=<USERNAME>
$ export DOCKER_REGISTRY_PASSWORD=<PASSWORD>
```

Scan the `XXX.artifactory.XXX/YYY-image` image.

```shell-session
$ vault-radar scan docker-image -i XXX.artifactory.XXX/YYY-image \
    -o results-docker-image.csv
```

### Scanning a docker image and output in JSON

Scan a Docker image and write the results to a file in [JSON
Lines](https://jsonlines.org/) format.

```shell-session
$ vault-radar scan docker-image -i <IMAGE> \
    -o <PATH>.jsonl \
    -f json
```

### HCP connection scanning behavior

The scan commands require an HCP cloud connection to ensure
that hashes are generated using a shared salt from the
cloud, keeping consistency across scans. To populate the required HCP connection
information, refer to the [HCP
upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page.

### Scanning using a Vault index file

Perform a scan using a generated Vault index and write the results to an output
file. In this mode, if a risk was previously found in Vault, the scan results
will report the location in Vault as well.

```shell-session
$ vault-radar scan docker-image -i <IMAGE> \
    -o <PATH>.csv \
    --index-file <PATH_TO_INDEX>.jsonl
```

[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation)

### Scan and restrict the number of secrets found

Scan a Docker image, write the results to an output file, and stop scanning
when the defined number of secrets are found.

```shell-session
$ vault-radar scan docker-image -i <IMAGE> \
    -o <PATH>.csv \
    -l <LIMIT>
```

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/file.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/file.mdx
new file mode 100644
index 0000000000..b6e6b9f02e
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/file.mdx
@@ -0,0 +1,98 @@
---
page_title: scan file command
description: |-
  The `scan file` command is used for scanning a file.
---

# scan file

@include 'beta-feature.mdx'

@include 'vault-radar/version-requirement.mdx'

The `scan file` command is used for scanning a file. It is similar to the
[scan folder](/hcp/docs/vault-radar/cli/scan/folder) command but scans a
single file. One difference is that it can also read data from standard input.

## Usage

```plaintext
Usage: vault-radar scan file [options]
```

### Scanning a file

Scan a file and write the results to a file in CSV format, the default output
format.

```shell-session
$ vault-radar scan file -p <PATH_TO_FILE> -o <PATH>.csv
```

### Scanning a file and output in JSON

Scan a file and write the results to a file in [JSON
Lines](https://jsonlines.org/) format.

```shell-session
$ vault-radar scan file -p <PATH_TO_FILE> -o <PATH>.jsonl -f json
```

### Read data from stdin

Scan data coming from stdin. The `--name` parameter can be used to name the data
coming from stdin; the name is used in the secret URI in the output file.

```shell-session
$ echo "password abcABC123" | vault-radar scan file \
    -o <PATH>.csv \
    --name <NAME>
```

### Scanning using a baseline file

Perform a scan using a previous scan's result and write the new changes to an
outfile. With the `-b` option, only new risks (risks that were not found in the
previous scan) will be reported.

```shell-session
$ vault-radar scan file -p <PATH_TO_FILE> \
    -b <PATH_TO_BASELINE>.csv \
    -o <PATH>.csv
```

### HCP connection scanning behavior

The scan commands require an HCP cloud connection to ensure
that hashes are generated using a shared salt from the
cloud, keeping consistency across scans. To populate the required HCP connection
information, refer to the [HCP
upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page.

### Scanning using a Vault index file

Perform a scan using a generated Vault index and write the results to an
outfile. In this mode, if a risk was previously found in Vault, the scan results
will report the location in Vault as well.
```shell-session
$ vault-radar scan file -p <PATH_TO_FILE> -o <PATH>.csv \
    --index-file <PATH_TO_INDEX>.jsonl
```

[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation)

### Scan and restrict the number of secrets found

Scan a file, write the results to an outfile, and stop scanning when the
defined number of secrets are found.

```shell-session
$ vault-radar scan file -p <PATH_TO_FILE> \
    -o <PATH>.csv \
    -l <LIMIT>
```

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/folder.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/folder.mdx
new file mode 100644
index 0000000000..6021a56118
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/folder.mdx
@@ -0,0 +1,116 @@
---
page_title: scan folder command
description: |-
  The `scan folder` command is used for scanning a local folder.
---

# scan folder

@include 'beta-feature.mdx'

@include 'vault-radar/version-requirement.mdx'

The `scan folder` command is used for scanning a local folder.

## Usage

```plaintext
Usage: vault-radar scan folder [options]
```

### Command options

- `--path, -p`: If specified, scans the given folder; otherwise scans the current working directory
- `--outfile, -o`: Specifies the file to store information about found secrets (required)
- `--format, -f`: Specifies the output format; csv and json are supported. Defaults to csv
- `--baseline, -b`: Specifies the file with previous scan results. Only new secrets will be reported.
- `--limit, -l`: Specifies the maximum number of secrets to be reported. The scan will stop when the limit is reached
- `--host-name`: Specifies the host name to use in the risk URI; defaults to the local hostname
- `--path-prefix`: Specifies the path prefix to use in the risk URI.
If not specified, the full local path will be used
- `--index-file`: Specifies the index file path to use in order to determine which risks are Vaulted

### Scanning a folder

Scan a folder and write the results to a file in CSV format, the default
output format.

```shell-session
$ vault-radar scan folder -p <PATH_TO_FOLDER> -o <PATH>.csv
```

### Scanning a folder and output in JSON

Scan a folder and write the results to a file in [JSON
Lines](https://jsonlines.org/) format.

```shell-session
$ vault-radar scan folder -p <PATH_TO_FOLDER> \
    -o <PATH>.jsonl \
    -f json
```

### HCP connection scanning behavior

The scan commands require an HCP cloud connection to ensure
that hashes are generated using a shared salt from the
cloud, keeping consistency across scans. To populate the required HCP connection
information, refer to the [HCP
upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page.

### Scanning using a baseline file

Perform a scan using a previous scan's result and write the new changes to an
outfile. With the `-b` option, only new risks (risks that were not found in the
previous scan) will be reported.

```shell-session
$ vault-radar scan folder -p <PATH_TO_FOLDER> \
    -b <PATH_TO_BASELINE>.csv \
    -o <PATH>.csv
```

### Scanning using a Vault index file

Perform a scan using a generated Vault index and write the results to an output
file. In this mode, if a risk was previously found in Vault, the scan results
will report the location in Vault as well.

```shell-session
$ vault-radar scan folder -p <PATH_TO_FOLDER> \
    -o <PATH>.csv \
    --index-file <PATH_TO_INDEX>.jsonl
```

[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation)

### Scan and restrict the number of secrets found

Scan a folder, write the results to an outfile, and stop scanning when the
defined number of secrets are found.
```shell-session
$ vault-radar scan folder -p <PATH_TO_FOLDER> \
    -o <PATH>.csv \
    -l <LIMIT>
```

### Modify the secret URI in the output file

By default, the secret URI in the result file will be the full local file path
where the secret has been found. If the results from scan runs on different
machines must be combined for further analysis, the `--host-name` and
`--path-prefix` options can be used. `--host-name` specifies the host name to
use in the secret URI and defaults to the local hostname. `--path-prefix`
specifies the path prefix to use in the secret URI. If not specified, the full
local path will be used.

```shell-session
$ vault-radar scan folder -p <PATH_TO_FOLDER> \
    -o <PATH>.csv \
    --host-name <HOST_NAME> \
    --path-prefix <PATH_PREFIX>
```

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/bitbucket.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/bitbucket.mdx
new file mode 100644
index 0000000000..55b96cb51f
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/bitbucket.mdx
@@ -0,0 +1,21 @@
---
page_title: Bitbucket pre-receive hook
description: |-
  Information on how to configure the pre-receive hook for Bitbucket.
---

# Bitbucket pre-receive hook

[The recommended way to create a hook plugin for Bitbucket Server is to use the Java APIs.](https://confluence.atlassian.com/bitbucketserver0810/using-repository-hooks-1236442291.html) This is unsupported at this time.

The following are required to be uploaded to the server first:

- The vault-radar CLI executable.
- A [valid Vault Radar license](/hcp/docs/vault-radar/cli/scan/git/pre-receive/overview#Prerequisites).
- A [config file](/hcp/docs/vault-radar/cli/scan/git/pre-receive/overview#Configuration).
- A hook script.
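The hook script itself is server-specific. As a rough sketch (the file locations shown are assumptions, not Bitbucket requirements), it can resolve its own directory, point `vault-radar` at the uploaded license and config files, and then invoke the scan:

```bash
#!/bin/bash

# Resolve the directory this hook lives in, so the uploaded license and
# config file can sit next to the script on the server.
script_dir=$(dirname "$(realpath "$0")")

# These paths are assumptions; adjust them to wherever you uploaded the files.
export VAULT_RADAR_LICENSE_PATH=$script_dir/vault-radar.hclic
export VAULT_RADAR_CONFIG_PATH=$script_dir/config.json

# Hand control to vault-radar, which reads the pushed refs from stdin.
exec "$script_dir/vault-radar" scan git pre-receive
```

The push is rejected when the hook exits with a non-zero status, so no extra error handling is needed around the final `exec`.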
Refer to the [Bitbucket documentation for more information](https://confluence.atlassian.com/bitbucketserverkb/how-to-create-a-simple-hook-in-bitbucket-data-center-and-server-779171711.html) that supports your Bitbucket version and deployment model.

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/github.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/github.mdx
new file mode 100644
index 0000000000..b2e68fc159
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/github.mdx
@@ -0,0 +1,111 @@
---
page_title: GitHub pre-receive hook
description: |-
  Information on how to configure the pre-receive hook for GitHub.
---

# GitHub pre-receive hook

Setting up and configuring pre-receive hooks is tightly coupled with the server implementation. As a result, it may be helpful to refer to [the GitHub documentation](https://docs.github.com/en/enterprise-server@3.9/admin/enforcing-policies/enforcing-policy-with-pre-receive-hooks/about-pre-receive-hooks) for more information. The following is an example of how to set up a pre-receive hook using vault-radar for GitHub Enterprise Server (GHES).

## Install `vault-radar`

The `vault-radar` CLI has to be installed or uploaded to the GitHub server where the hook will be run. There are a few ways this can be done. The easiest is to include the binary in the same repository where the hook script is saved.

Alternatively, the CLI can be installed in a chroot environment. This way the CLI will be part of the chroot environment and will be available as a global command in the pre-receive script.

### Commit the CLI as part of the repo

This is done by putting the CLI binary in the hook repository itself.

The CLI must be uploaded as a regular file and not a Git LFS file. LFS files are not properly checked out by GHES and the hook will fail.
Setting the `git config http.postBuffer 157286400` option might be needed to increase the buffer size for large files.

### Create new chroot environment

Overall GHES instructions are [here](https://docs.github.com/en/enterprise-server@3.9/admin/enforcing-policies/enforcing-policy-with-pre-receive-hooks/creating-a-pre-receive-hook-environment#creating-a-pre-receive-hook-environment-using-chroot).

Below are the steps to create a new chroot environment based on the guide above.

1. Download a version from https://releases.hashicorp.com/vault-radar/

1. Create a `./Dockerfile.pre-receive-env` similar to the one below:

    ```Dockerfile
    FROM alpine:3.3
    RUN apk add --update --no-cache git bash

    # TARGETARCH and TARGETOS are set automatically when --platform is provided.
    ARG TARGETOS
    ARG TARGETARCH

    COPY dist/linux/$TARGETARCH/vault-radar /bin/
    ```

1. Use `./Dockerfile.pre-receive-env` as a base image and add the CLI to it:

    ```shell-session
    $ docker build -f ./Dockerfile.pre-receive-env -t github-pre-receive.alpine --platform=linux/amd64 .
    ```

    ```shell-session
    $ docker create --name github-pre-receive.alpine github-pre-receive.alpine /bin/true
    ```

    ```shell-session
    $ docker export github-pre-receive.alpine | gzip > github-pre-receive.alpine.tar.gz
    ```

1. Upload `github-pre-receive.alpine.tar.gz` to GHES.

1. Import it as a new GHES hook environment in Settings > Pre-receive environments.

## Hook repo

On GHES, pre-receive hooks are stored in a Git repository. The repo must contain:

- the hook script itself
- the vault-radar license
- the vault-radar config file
- optionally, the CLI binary

The license and config file could be part of the hook environment, similar to the CLI itself, but keeping them in the repo simplifies the setup.
### Example hook script

Here is an example of a hook script used in a GitHub Enterprise Server environment:

```bash
#!/bin/bash

git=$(which git)
export GIT=$git

# Get the directory of the script.
# This is needed to properly pass the location of the license file and config file to vault-radar.
script_dir=$(dirname "$(realpath "$0")")

# Set the HOME environment variable to the githook user's home directory.
# On the GHES hook environment the HOME variable is not set,
# and without it vault-radar will fail.
export HOME=/home/githook

export VAULT_RADAR_LICENSE_PATH=$script_dir/vault-radar.hclic
export VAULT_RADAR_CONFIG_PATH=$script_dir/config.json

exec $script_dir/vault-radar scan git pre-receive
```

The script above uses the `vault-radar` executable, license, and config from the repo.

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/gitlab.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/gitlab.mdx
new file mode 100644
index 0000000000..3b2200e329
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/gitlab.mdx
@@ -0,0 +1,15 @@
---
page_title: GitLab pre-receive hook
description: |-
  Information on how to configure the pre-receive hook for GitLab.
---

# GitLab pre-receive hook

The following are required to be uploaded to the server first:

- The vault-radar CLI executable.
- A [valid Vault Radar license](/hcp/docs/vault-radar/cli/scan/git/pre-receive/overview#Prerequisites).
- A [config file](/hcp/docs/vault-radar/cli/scan/git/pre-receive/overview#Configuration).
- A hook script.

Depending on the GitLab version and how the server is deployed, there can be many ways to do this.
[See GitLab's documentation for more information.](https://docs.gitlab.com/ee/administration/server_hooks.html)

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/overview.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/overview.mdx
new file mode 100644
index 0000000000..6c04c9587e
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/git/pre-receive/overview.mdx
@@ -0,0 +1,177 @@
---
page_title: scan git pre-receive command overview
description: |-
  The scan git pre-receive command is used to enable scanning content in a Git pre-receive hook.
---

# scan git pre-receive

The `scan git pre-receive` command is used to enable scanning content in a [Git pre-receive hook](https://git-scm.com/docs/githooks/2.27.0).

Pre-receive hooks are only available in some Git hosting services. Please check with your Git hosting service to see if pre-receive hooks are supported. For example, GitHub Cloud does not support pre-receive hooks, but the self-hosted version of GitHub, GitHub Enterprise, does support pre-receive hooks.

Because pre-receive hooks are run on the server side, they can be used to enforce policies centrally. For example, you can use a pre-receive hook to enforce that all code changes are scanned for secrets before they are accepted into the repository.

Most implementations of pre-receive hooks have a timeout to ensure the hook's execution does not block the Git operation for too long. If the hook takes too long to execute, the operation will be aborted and likely result in the commit being rejected by the server. The `scan git pre-receive` command is designed to require a user to configure the exact set of risks they want to scan. The recommendation is to limit the set of risks to a small number of patterns you know are relevant to your use cases. By default, the scan will not check whether a secret is active or not.
This is to ensure that the scan does not take too long to execute, because most activeness checks require making a network call, which can add a lot of time to the overall evaluation.

For larger scans, consider using a `pre-commit` hook instead with the [`vault-radar` CLI.](/hcp/docs/vault-radar/cli/install/git)

## Prerequisites

The `scan git pre-receive` command requires a valid license. Please reach out to your customer support contact for help generating a license.

The license can be saved:

- As the environment variable `VAULT_RADAR_LICENSE`
- In a file defined by `VAULT_RADAR_LICENSE_PATH`
- In a file at the default path `$HOME/.hashicorp/vault-radar/vault-radar.hclic`

Additionally, `vault-radar` must be installed on the server where the Git repository is hosted. Installation instructions can be found [here](/hcp/docs/vault-radar/cli/index).

## Configuration

The `scan git pre-receive` command is configured using a configuration file. By default, Vault Radar will look for the file at `$HOME/.hashicorp/vault-radar/config.json`. This can be overridden by setting the `VAULT_RADAR_CONFIG_PATH` environment variable to the path of the configuration file.

An example of the configuration file contents is shown below using the default values:

```json
{
  "pre_receive_skip_activeness": true,
  "pre_receive_fail_severity": "medium",
  "pre_receive_risk_allowlist": []
}
```

The command requires `pre_receive_risk_allowlist` to be set to a non-empty list of risk types to scan. The example shown would exit with a non-zero status code as a result.

The values for the `pre_receive_risk_allowlist` option can be either a risk type or a risk description.
For example:

```json
{
  "pre_receive_risk_allowlist": [
    "jwt_token",
    "GitHub personal access token"
  ]
}
```

Review the Vault Radar [secret types documentation](/hcp/docs/vault-radar/manage/secret-types) for a list of the supported types and descriptions that can be used in the configuration.

### Configuration options

- `pre_receive_skip_activeness`: `true` results in skipping activeness checks. The default is `true`. Enabling activeness checks will likely add more time to the evaluation; set to `false` with caution.
- `pre_receive_fail_severity`: Specifies the minimum severity of the risk that will cause the scan to fail. The default is `medium`. Additional documentation about severity can be found [here](/hcp/docs/vault-radar/manage/severity).
- `pre_receive_risk_allowlist`: Specifies the list of risk types or descriptions to scan. The scan will only scan for risks in this list. The command will exit with a non-zero status code if the list is empty.

## Usage

The command is expected to be called from a pre-receive hook.

```shell-session
$ vault-radar scan git pre-receive
```

## Testing locally

Since the pre-receive hook is typically run on the central Git server, it's often critical to have confidence that the change will work without disrupting others working on projects. Fortunately, it's possible to test the pre-receive hook locally on a repo.

This can be done on an existing repo or by creating a new bare repo. In this example, we create a repo within a directory called `pre-receive-test`.

1. Make a directory to test in.

    ```shell-session
    $ mkdir pre-receive-test && cd pre-receive-test
    ```

1. Create a new bare repo:

    ```shell-session
    $ git init --bare repo.git
    ```

1. Create a `~/.hashicorp/vault-radar/config.json` if you do not have one.
In the example we use a simple configuration that checks just for JWT tokens:

    ```json
    {
      "pre_receive_risk_allowlist": [ "jwt_token" ]
    }
    ```

1. Set the pre-receive hook for the project and make it invoke the `vault-radar scan git pre-receive` command:

    ```shell-session
    $ echo "exec vault-radar scan git pre-receive" > repo.git/hooks/pre-receive
    $ chmod +x repo.git/hooks/pre-receive
    ```

    In the example we assume `vault-radar` is in the PATH. If it's not, you can specify the full path to the `vault-radar` binary.

1. Then clone the repo:

    ```shell-session
    $ git clone repo.git repo
    ```

1. From the clone, create a new file and commit it. In the example we create a file called `jwt.txt` with a JWT token from [jwt.io](https://jwt.io/):

    ```shell-session
    $ cd repo
    $ echo 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c' >> jwt.txt
    $ git add jwt.txt
    $ git commit -m "Add jwt token"
    ```

1. Finally, push the changes back to the origin repo:

    ```shell-session
    $ git push origin main
    ```

If things are configured correctly, there should be output from `vault-radar` indicating the commit was rejected because of the JWT token.

```shell-session
git push origin main
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Delta compression using up to 12 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 380 bytes | 380.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
remote: error: HC001: Repository rule violations found
remote:
remote: - Hashicorp Vault Radar PUSH PROTECTION
remote: —————————————————————————————————————————
remote: Resolve the following violations before pushing again
remote:
remote: - Push cannot contain secrets
remote:
remote: (?) Learn how to resolve a blocked push
remote: https://developer.hashicorp.com/hcp/docs/vault-radar/cli/scan/git/pre-receive
remote:
remote:
remote: —— Generic JWT token ———————————————————————————————————————————
remote: - ref: refs/heads/main
remote: commit: 94b08e4f24065b5c435199d363932d42014f247d
remote: path: jwt.txt:1
remote: severity: medium
remote:
remote:
remote:
To /Users/someone/pre-receive-test/repo.git
 ! [remote rejected] main -> main (pre-receive hook declined)
error: failed to push some refs to '/Users/someone/pre-receive-test/repo.git'
```

The exact configuration of a pre-receive hook on a Git server is unique to the provider. Please refer to the documentation for your specific Git server for more information on how to configure a pre-receive hook.

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/jira.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/jira.mdx
new file mode 100644
index 0000000000..d251d7eff0
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/jira.mdx
@@ -0,0 +1,165 @@
---
page_title: scan jira command
description: |-
  The `scan jira` command is used for scanning an Atlassian Jira Cloud instance.
---

# scan jira

@include 'vault-radar/version-requirement.mdx'

The `scan jira` command is used for scanning an Atlassian Jira Cloud or Jira Server
instance. Scanning currently covers the latest version of the issue
description and all issue comments.

## Authentication

The `vault-radar` CLI needs authentication credentials in order to
make requests to the Jira instance.

### Jira Cloud

This means your instance is hosted by Atlassian, and your instance URL should
have ".atlassian.net" in it.
For cloud, there is only one supported pattern, and it requires an [Atlassian API
Token](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/)
and the email of the account that the token belongs to.

In order to provide the information to `vault-radar`, assign the appropriate
values to both of these environment variables:

1. `ATLASSIAN_API_TOKEN`
1. `ATLASSIAN_ACCOUNT_EMAIL`

### Jira Server

For self-hosted versions of Jira, there are up to two different patterns possible.

Jira Software versions 8.14 and higher support [creating a personal access token
for a
user](https://developer.atlassian.com/server/jira/platform/personal-access-token/).
The token will have all the same access rights as the user who creates it. To
use the token, set the following environment variable to the generated token:
`JIRA_PERSONAL_ACCESS_TOKEN`

Using a personal access token is more secure and should be the preferred access
pattern. A personal access token is easier to revoke and regenerate, and
generally has a smaller blast radius than a password.

All versions of Jira Server support authentication using the username (not the
email) and password. To authenticate using these credentials, set both of these
environment variables:

1. `JIRA_USERNAME`
1. `JIRA_PASSWORD`

## Usage

```plaintext
Usage: vault-radar scan jira [options]
```

### Command options

- `--url, -u`: The URL of the Jira instance to scan (required)
- `--project-key, -p`: Specifies the Jira project to scan
- `--issue-key, -i`: Specifies the Jira issue to scan
- `--outfile, -o`: Specifies the file to store information about found secrets
- `--format, -f`: Specifies the output format; csv and json are supported. Defaults to csv
- `--baseline, -b`: Specifies the file with previous scan results. Only new secrets will be reported.
- `--limit, -l`: Specifies the maximum number of secrets to be reported.
The scan will stop when the limit is reached
- `--issue-limit`: Specifies the maximum number of Jira issues to scan
- `--index-file`: Specifies the index file path to use in order to determine which risks are Vaulted
- `--disable-ui`: Specifies that the scan summary should not be logged to stdout
- `--skip-activeness`: If specified, skips activeness checks

## Examples

The following examples all assume you have already set the appropriate
environment variables or that you intend to include them as part of the command
you run.

### Scan an issue

Scan an issue and write the results to an outfile in CSV format, the default
output format.

```shell-session
$ vault-radar scan jira -u <URL> -i <ISSUE_KEY> -o <PATH>.csv
```

### Scan an issue and output JSON

Scan an issue and write the results to an outfile in JSON format.

```shell-session
$ vault-radar scan jira -u <URL> -i <ISSUE_KEY> \
    -o <PATH>.json \
    -f json
```

### Scan a project

Scan a project and write the results to an outfile.

```shell-session
$ vault-radar scan jira -u <URL> -p <PROJECT_KEY> \
    -o <PATH>.csv
```

### Scan using a baseline file

Perform a scan using a previous scan's result and write the new changes to an
outfile.

```shell-session
$ vault-radar scan jira -u <URL> -p <PROJECT_KEY> \
    -b <PATH_TO_BASELINE> \
    -o <PATH>.csv
```

### HCP connection scanning behavior

The scan commands require an HCP cloud connection to ensure
that hashes are generated using a shared salt from the
cloud, keeping consistency across scans. To populate the required HCP connection
information, refer to the [HCP
upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page.

### Scan using a Vault index file

Perform a scan using a generated Vault index and write the results to an
outfile.
```shell-session
$ vault-radar scan jira -u <URL> -p <PROJECT_KEY> \
    --index-file <PATH_TO_INDEX>.jsonl \
    -o <PATH>.csv
```

[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation)

### Scan and restrict the number of secrets found

Scan a project, write the results to an outfile, and stop scanning when the
defined number of secrets are found.

```shell-session
$ vault-radar scan jira -u <URL> -p <PROJECT_KEY> \
    -o <PATH>.csv \
    -l <LIMIT>
```

### Scan and restrict the number of issues scanned

Scan a project, write the results to an outfile, and stop scanning when the
defined number of issues has been scanned.

```shell-session
$ vault-radar scan jira -u <URL> -p <PROJECT_KEY> \
    -o <PATH>.csv \
    --issue-limit <LIMIT>
```

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/repo.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/repo.mdx
new file mode 100644
index 0000000000..3f8a3dfb06
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/repo.mdx
@@ -0,0 +1,165 @@
---
page_title: scan repo command
description: |-
  The `scan repo` command is used for scanning a git repository.
---

# scan repo

@include 'vault-radar/version-requirement.mdx'

The `scan repo` command is used for scanning a Git repository.

## Authentication

The `scan repo` command can either scan an existing repo clone or automatically
clone the repo using the provided repo URL. If an existing clone is used, no
authentication is needed. If a repo is public, no authentication is needed
either. Otherwise, a Git token must be provided so the CLI can clone the repo.
The CLI reads the token from the `VAULT_RADAR_GIT_TOKEN` environment variable.
The environment variable value depends on the Git server provider. For GitHub
and GitLab, it can be just a personal access token (PAT). For Bitbucket and
Azure DevOps, it should be in the format `<USERNAME>:<TOKEN>`.

The CLI internally uses `https://` to clone the repo and sets the HTTP
`username:password` part of the clone URL to the value of
`VAULT_RADAR_GIT_TOKEN`.
Consult your Git server provider's documentation for the exact format used for
`https://` authentication in case the format described above does not work.

## Usage

```plaintext
Usage: vault-radar scan repo [options]
```

### Command options

- `--url, -u`: If specified, clones and scans the given repo
- `--clone-dir, -c`: If specified, scans the given existing repo clone
- `--outfile, -o`: Specifies the file to store information about found secrets
- `--format, -f`: Specifies the output format; csv and json are supported. Defaults to csv
- `--baseline, -b`: Specifies the file with previous scan results. Only new secrets will be reported
- `--limit, -l`: Specifies the maximum number of secrets to be reported. The scan will stop when the limit is reached
- `--commit-limit`: Specifies the maximum number of commits to be scanned. The scan will stop when the limit is reached
- `--index-file`: Specifies the index file path to use in order to determine which risks are Vaulted
- `--disable-ui`: Specifies that the scan summary should not be logged to stdout
- `--skip-activeness`: If specified, skips activeness checks

### Scanning a repo

Automatically clones and scans all commits available in a repo
and uploads the results to HCP.

```shell-session
$ vault-radar scan repo -u <REPO_URL>
```

### Scanning an existing clone

Scan an existing repo (clone) and write the results to a file in CSV format.

```shell-session
$ vault-radar scan repo -c <PATH_TO_CLONE> -o <PATH>.csv
```

The difference from automatically cloning a repo is that with an existing
clone, only the commits available in the clone will be scanned, not all
reachable commits in the repo. The clone might have far fewer commits
than the repo itself, for example if the clone is a shallow clone or
if only a single branch was cloned. To scan all the reachable commits, it is
recommended to scan the repo using the `-u` parameter.
### Scanning an existing clone and output in JSON

Scan a repo clone and write the results to a file in [JSON
Lines](https://jsonlines.org/) format.

```shell-session
$ vault-radar scan repo -c <path to clone> -o <path to output file>.jsonl -f json
```

### HCP connection scanning behavior

The scan commands require an HCP cloud connection to ensure that hashes are
generated using a shared salt from the cloud, keeping them consistent across
scans. To populate the required HCP connection information, refer to the [HCP
upload](/hcp/docs/vault-radar/cli/configuration/upload-results-to-hcp) page.

To properly attribute found risks to the `git_reference` (branch) where they
were introduced, the default branch should be checked out, or the clone should
be a bare clone. If a non-default branch is checked out, most risks are
attributed to that branch instead, and only risks that are still on the tip of
that branch are reported.

### Scanning using a baseline file

Perform a scan using a previous scan's results and write the new changes to an
outfile. With the `-b` option, only new risks (risks that were not found in the
previous scan) are reported.

```shell-session
$ vault-radar scan repo -u <repo url> -b <path to baseline file>.csv \
    -o <path to output file>.csv
```

The previous and current scans are expected to be similar, for example both
clone scans or both repo scans, with or without history.

### Scanning using a Vault index file

Perform a scan using a generated vault index and upload the results to HCP. In
this mode, if a risk was previously found in Vault, the scan results report its
location in Vault as well.

```shell-session
$ vault-radar scan repo -u <repo url> \
    --index-file <path to index file>.jsonl
```

[How to generate a Vault Index](/hcp/docs/vault-radar/cli/index/vault#index-generation)

### Scan and restrict the number of secrets found

Stop scanning the repo when a defined number of secrets are found.
```shell-session
$ vault-radar scan repo -u <repo url> -l <maximum number of secrets>
```

### Scan and restrict the number of commits scanned

Stop scanning the repo when a defined number of commits have been scanned.

```shell-session
$ vault-radar scan repo -u <repo url> --commit-limit <maximum number of commits>
```

diff --git a/content/hcp-docs/content/docs/vault-radar/cli/scan/slack.mdx b/content/hcp-docs/content/docs/vault-radar/cli/scan/slack.mdx
new file mode 100644
index 0000000000..09b88805fc
--- /dev/null
+++ b/content/hcp-docs/content/docs/vault-radar/cli/scan/slack.mdx

---
page_title: scan slack command
description: |-
  The `scan slack` command is used for scanning Slack channels and identifying messages that contain sensitive secrets.
---

# scan slack

@include 'beta-feature.mdx'

@include 'vault-radar/version-requirement.mdx'

The `scan slack` command is used for scanning Slack channels and identifying
messages that contain sensitive secrets.

## Authentication

The `scan slack` command needs authentication credentials in order to make
requests to Slack. Follow the steps below to generate a User OAuth token, then
set the `SLACK_USER_TOKEN` environment variable before scanning Slack channels.

- [Create a Slack app](https://api.slack.com/start/quickstart#creating)
- [Request scopes](https://api.slack.com/start/quickstart#scopes). In the
  **OAuth & Permissions** section, scroll down to **Scopes**. Under **User
  Token Scopes**, add the following scopes:
  - `channels:history`
  - `channels:read`
  - `groups:history`
  - `groups:read`
  - `im:history`
  - `im:read`
  - `mpim:history`
  - `mpim:read`
  - `users:read`
  - `users:read.email`

Use **User Token Scopes** and not **Bot Token Scopes**.
- [Install and authorize the app](https://api.slack.com/start/quickstart#installing)
- In the **OAuth & Permissions** section, scroll down to **OAuth Tokens for
  Your Workspace** and copy the value of **User OAuth Token**.

## Usage

```plaintext
Usage: vault-radar scan slack [options]
```

### Command options

- `--outfile, -o`: Specifies the file to store information about found secrets (required)
- `--format, -f`: Specifies the output format; `csv` and `json` are supported. Defaults to `csv`
- `--baseline, -b`: Specifies the file with previous scan results. Only new secrets will be reported
- `--limit, -l`: Specifies the maximum number of secrets to be reported. The scan stops when the limit is reached
- `--url, -u`: Specifies the Slack base API path to scan (required)
- `--dm, -d`: Specifies the Slack DM to scan
- `--channel, -c`: Specifies the Slack channel to scan
- `--app, -a`: Specifies the Slack app to scan
- `--index-file`: Specifies the index file path to use in order to determine which risks are Vaulted
- `--disable-ui`: Specifies that the scan summary should not be logged to stdout
- `--skip-activeness`: If specified, skips activeness checks

## Examples

The following examples all assume you have already set the appropriate
environment variable or that you intend to include it as part of the command
you run.

### Scanning messages in all accessible channels

Scan all public and private channels accessible by the Slack app (associated
with `SLACK_USER_TOKEN`) and write the results to a file in CSV format, which
is the default output format. The default behavior is to scan messages added
in the last day.

```shell-session
$ vault-radar scan slack -o <path to output file>.csv
```

### Scanning messages added in the recent past

Scan messages added in the last `