diff --git a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181119.md b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181119.md
index f233c86beef..c0a6f715b5e 100644
--- a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181119.md
+++ b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181119.md
@@ -24,7 +24,7 @@ Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
- In cases such as `'2018-01-31'::TIMESTAMP + '1 month'`, where an intermediate result of February 31st needs to be normalized, previous versions of CockroachDB would advance to March 3. Instead, CockroachDB now "rounds down" to February 28th to agree with the values returned by PostgreSQL. This change also affects the results of the `generate_series()` function when used with timestamps (see the example after this list). [#31146][#31146]
- Updated the output of [`SHOW ZONE CONFIGURATIONS`](https://www.cockroachlabs.com/docs/v19.1/show-zone-configurations). Also, unset fields in [zone configurations](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones) now inherit parent values. [#30611][#30611] {% comment %}doc{% endcomment %}
- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting) is enabled, attempts to use `CREATE/DROP SCHEMA`, `DEFERRABLE`, `CREATE TABLE (LIKE ...)`, and `CREATE TABLE ... WITH` are now collected as telemetry to gauge demand for these currently unsupported features. [#31635][#31635] {% comment %}doc{% endcomment %}
-- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting) is enabled, the name of SQL [built-in functions](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) are now collected upon evaluation errors. [#31677][#31677] {% comment %}doc{% endcomment %}
+- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting) is enabled, the name of SQL [built-in functions](https://www.cockroachlabs.com/docs/stable/functions-and-operators) are now collected upon evaluation errors. [#31677][#31677] {% comment %}doc{% endcomment %}
- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting) is enabled, attempts by client apps to use the unsupported "fetch limit" parameter (e.g., via JDBC) are now collected as telemetry to gauge support for this feature. [#31637][#31637]
- The [`IMPORT format (file)`](https://www.cockroachlabs.com/docs/v19.1/import) syntax is deprecated in favor of `IMPORT format file`. Similarly, `IMPORT TABLE ... FROM format (file)` is deprecated in favor of `IMPORT TABLE ... FROM format file`. [#31263][#31263] {% comment %}doc{% endcomment %}
- For compatibility with PostgreSQL, it is once again possible to use the keywords `FAMILY`, `MINVALUE`, `MAXVALUE`, `INDEX`, and `NOTHING` as table names, and the names "index" and "nothing" are once again accepted in the right-hand side of `SET` statement assignments. [#31731][#31731] {% comment %}doc{% endcomment %}
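
The normalization change above can be checked directly; the before/after results below are taken from the note itself:

~~~ sql
SELECT '2018-01-31'::TIMESTAMP + '1 month';
-- now:    2018-02-28 00:00:00 (matches PostgreSQL)
-- before: 2018-03-03 00:00:00
~~~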
@@ -79,7 +79,7 @@ Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
Performance improvements
-- Improved the performance of [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v2.1/as-of-system-time) queries by letting them use the table descriptor cache. [#31716][#31716]
+- Improved the performance of [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/stable/as-of-system-time) queries by letting them use the table descriptor cache. [#31716][#31716]
- Within a transaction, when performing a schema change after the table descriptor has been modified, accessing the descriptor should be faster. [#30934][#30934]
- Improved the performance of index data deletion. [#31326][#31326]
- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) can now determine more keys in certain cases involving unique indexes, potentially resulting in better plans. [#31662][#31662]
diff --git a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190211.md b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190211.md
index c917ca2da61..8b338c52638 100644
--- a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190211.md
+++ b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190211.md
@@ -97,7 +97,7 @@ In addition to SQL language enhancements, general usability improvements, perfor
Doc updates
-- The new [Life of a Distributed Transaction](https://www.cockroachlabs.com/docs/v2.1/architecture/life-of-a-distributed-transaction) details the path that a query takes through CockroachDB's architecture, starting with a SQL client and progressing all the way to RocksDB (and then back out again). [#4281](https://github.com/cockroachdb/docs/pull/4281)
+- The new [Life of a Distributed Transaction](https://www.cockroachlabs.com/docs/stable/architecture/life-of-a-distributed-transaction) details the path that a query takes through CockroachDB's architecture, starting with a SQL client and progressing all the way to RocksDB (and then back out again). [#4281](https://github.com/cockroachdb/docs/pull/4281)
- Added a [warning about cross-store rebalancing](https://www.cockroachlabs.com/docs/v19.1/start-a-node#store) not working as expected in 3-node clusters with multiple stores per node. [#4320](https://github.com/cockroachdb/docs/pull/4320)
- Updated the [`INT`](https://www.cockroachlabs.com/docs/v19.1/int) documentation to include examples of actual min/max integers supported by each type for easier reference. Also added a description of possible compatibility issues caused by 64-bit integers vs., for example, JavaScript runtimes. [#4317](https://github.com/cockroachdb/docs/pull/4317)
- Documented the `sql.metrics.statement_details.plan_collection.period` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings), which controls how often the logical plan for a fingerprint is sampled (5 minutes by default) on the [**Statements**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-statements-page) page of the Admin UI. [#4316](https://github.com/cockroachdb/docs/pull/4316)
diff --git a/src/current/_includes/releases/v2.0/v2.0.3.md b/src/current/_includes/releases/v2.0/v2.0.3.md
index fcbde975db4..ca3a5d9af09 100644
--- a/src/current/_includes/releases/v2.0/v2.0.3.md
+++ b/src/current/_includes/releases/v2.0/v2.0.3.md
@@ -5,7 +5,7 @@ Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
General Changes
- The new `compactor.threshold_bytes` and `max_record_age` [cluster settings](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) can be used to configure the compactor. [#25458][#25458]
-- The new `cluster.preserve_downgrade_option` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) makes it possible to preserve the option to downgrade after [performing a rolling upgrade to v2.1](https://www.cockroachlabs.com/docs/v2.1/upgrade-cockroach-version). [#25811][#25811]
+- The new `cluster.preserve_downgrade_option` [cluster setting](https://www.cockroachlabs.com/docs/v2.0/cluster-settings) makes it possible to preserve the option to downgrade after [performing a rolling upgrade to v2.1](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version) (see the example after this list). [#25811][#25811]
-
-- Prevented execution errors reporting a missing `libtinfo.so.5` on Linux systems. [#24513][#24513]
-- A CockroachDB process will now flush its logs upon receiving `SIGHUP`.
-- Added a [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) for HLC to be monotonic across restarts. [#23744][#23744] {% comment %}doc{% endcomment %}
-- Added a [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) for HLC to panic on clock jumps. [#23717][#23717] {% comment %}doc{% endcomment %}
-- Statistics on the types of errors encountered are now included in [diagnostics reporting](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting). [#22912][#22912] {% comment %}doc{% endcomment %}
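-
-A quick sketch of the setting in use (the `'2.0'` value and the later reset illustrate typical usage and are not part of this note):
-
-~~~ sql
--- Before upgrading to v2.1, preserve the ability to downgrade back to v2.0.
-SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.0';
--- Once satisfied with v2.1, clear the setting to let the upgrade finalize.
-RESET CLUSTER SETTING cluster.preserve_downgrade_option;
-~~~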
-
-
Enterprise Edition Changes
-
-- It is now possible to [`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore) views when using the `into_db` option (example below). [#24555][#24555] {% comment %}doc{% endcomment %}
-- Relaxed the limitation on using [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup) in a mixed version cluster. [#24493][#24493]
-- `BACKUP` and `RESTORE` are more resilient to non-URL-safe characters in query string authentication parameters. [#24300][#24300]
-- The new `jobs.registry.leniency` [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) can be used to allow long-running import jobs to survive temporary node saturation. [#23913][#23913] {% comment %}doc{% endcomment %}
-- Added configurable limits on the number of `BACKUP`/`RESTORE` requests each store will process. [#23517][#23517] {% comment %}doc{% endcomment %}
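-
-A minimal sketch of restoring a view into a different database with `into_db` (the backup location and object names are illustrative):
-
-~~~ sql
-RESTORE mydb.my_view FROM 'nodelocal:///backups/mydb' WITH into_db = 'otherdb';
-~~~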
-
-
SQL Language Changes
-
-- Added configurable limits on the number of [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) requests each store will process. [#23517][#23517] {% comment %}doc{% endcomment %}
-- The new `ALTER TABLE ... INJECT STATISTICS` command injects table statistics from a JSON object (which can be obtained via `SHOW STATISTICS USING JSON`); see the example after this list. [#24488][#24488] {% comment %}doc{% endcomment %}
-- The new `SHOW STATISTICS USING JSON` variant of `SHOW STATISTICS` outputs table statistics as a JSON object; it can be used to extract statistics from clusters to reproduce issues. [#24488][#24488] {% comment %}doc{% endcomment %}
-- `VIRTUAL` and `WORK` are no longer reserved keywords and can again be used as unrestricted names. [#24491][#24491] {% comment %}doc{% endcomment %}
-- Added the `CANCEL SESSION` statement as well as an `IF EXISTS` variant to `CANCEL QUERY`. [#23861][#23861] {% comment %}doc{% endcomment %}
-- Added a new `session_id` column to the result of `SHOW SESSIONS`. [#23861][#23861] {% comment %}doc{% endcomment %}
-- Added support for the `information_schema.pg_expandarray()` function. [#24422][#24422]
-- `DROP DATABASE` and `ALTER DATABASE ... RENAME` now prevent the removal of a database name if that database is set as the current database (`SET database` / `USE`) and the session setting `sql_safe_updates` is also set. [#24246][#24246] {% comment %}doc{% endcomment %}
-- Added support for naming array types via the `_type` form and quoted type names. [#24190][#24190]
-- Added the `quote_ident()` built-in function for increased PostgreSQL compatibility. [#24190][#24190]
-- The behavior of `UPSERT` and `INSERT ... ON CONFLICT` when a `RETURNING` clause is present is now more consistent when an update touches the same row twice or more. This is a CockroachDB-specific extension. [#23698][#23698]
-- Added the `statement_timeout` session variable. [#23399][#23399] {% comment %}doc{% endcomment %}
-- The type determined for constant `NULL` expressions has been renamed to "unknown" for better compatibility with PostgreSQL. [#23142][#23142]
-- Attempts to modify virtual schemas with DDL statements now fail with a clearer error message. [#23044][#23044]
-- CockroachDB now recognizes the special syntax `SET SCHEMA <name>` as an alias for `SET search_path = <name>` for better compatibility with PostgreSQL. [#22997][#22997] {% comment %}doc{% endcomment %}
-- Added support for the `pg_sleep()` function. [#22804][#22804]
-- Division by zero now returns the correct error code. [#22912][#22912]
-- The GC of table data after a `DROP TABLE` now respects changes to the GC TTL interval specified in the relevant [replication zone](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones). [#22774][#22774]
-- The full names of tables/views/sequences are now properly logged in the system event log. [#22842][#22842]
-- `current_role` is now recognized as an alias for `current_user` for better compatibility with PostgreSQL. [#22828][#22828]
-- The special identifier `current_catalog` is now supported as an alias for `current_database()` for better compatibility with PostgreSQL. [#22828][#22828]
-- Added the `skip` option to the `IMPORT` command. [#23466][#23466] {% comment %}doc{% endcomment %}
-- The service latency tracked for SQL statements now includes the wait time of the execute message in the input queue. [#22880][#22880]
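-
-A sketch of moving statistics between clusters with the statement pair above (the table name and the exact JSON fields are assumptions for illustration):
-
-~~~ sql
--- On the source cluster, capture the statistics as a JSON object.
-SHOW STATISTICS USING JSON FOR TABLE users;
--- On the target cluster, inject the captured JSON to reproduce plans.
-ALTER TABLE users INJECT STATISTICS '[{"columns": ["id"], "created_at": "2018-06-01 00:00:00+00:00", "row_count": 1000, "distinct_count": 1000}]';
-~~~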
-
-
Command-Line Changes
-
-- The `cockroach gen autocomplete` command can now generate zsh-compatible completion files by passing `zsh` as an argument. [#24400][#24400] {% comment %}doc{% endcomment %}
-- Removed the `cockroach load csv` subcommand. [#24319][#24319]
-- When `cockroach gen haproxy` is run, if an `haproxy.cfg` file already exists in the current directory, it now gets fully overwritten instead of potentially resulting in an unusable config. [#24332][#24332] {% comment %}doc{% endcomment %}
-- The new `cockroach demo` command opens a SQL shell connected to a fully in-memory store, and an empty database named `demo`. It's useful for users or developers who wish to test out Cockroach's SQL dialect. [#24259][#24259] {% comment %}doc{% endcomment %}
-- Replication zones now allow for specifying an ordered list of lease placement preferences. Whenever possible, CockroachDB will attempt to put the lease for a range on a store that satisfies the first set of constraints. If that's not possible, it'll attempt to put the lease on a store that satisfies the second set of constraints, and so on. If none of the preferences can be met, the lease will be placed as it is today. [#23202][#23202]
-- Bracketed pastes are requested from the terminal emulator when possible. Pasting text into the interactive SQL shell is more reliable as a result. [#23116][#23116]
-- The `cockroach sql` command now reminds you about `SET database = ...` and `CREATE DATABASE` if started with no current database. [#23077][#23077]
-- Per-replica constraints in replication zones no longer have to add up to the total number of replicas in a range. If all replicas aren't specified, then the remaining replicas will be allowed on any store. [#23057][#23057] {% comment %}doc{% endcomment %}
-
-
Admin UI Changes
-
-- Removed explicit back links on **Events** and **Nodes** pages. [#23904][#23904]
-- Added a new debug page to display all [cluster settings](https://www.cockroachlabs.com/docs/v2.1/cluster-settings). [#24064][#24064]
-- While the **Logs** page loads, a spinner is now shown instead of a "no data" message. [#23496][#23496]
-- The **Logs** page now uses a monospaced font, properly renders newlines, and packs lines together more tightly. [#23496][#23496]
-- The **Node Map** now shows how long a node has been dead. [#23255][#23255] {% comment %}doc{% endcomment %}
-
-
Bug Fixes
-
-- Fixed a bug when using fractional units (e.g., `0.5GiB`) for the `--cache` and `--sql-max-memory` flags of `cockroach start`. [#24381][#24381]
-- Fixed the handling of role membership lookups within transactions. [#24284][#24284]
-- Fixed a panic around inverted index queries using the `->` operator. [#24576][#24576]
-- `JSONB` values can now be cast to `STRING` values (example below). [#24518][#24518]
-- Fixed a panic caused by a `WHERE` condition that requires a column to equal a specific value and at the same time equal another column. [#24506][#24506]
-- Fixed panics resulting from distributed execution of queries with OID types. [#24431][#24431]
-- Fixed a bug where an expected transaction heartbeat failure aborted the transaction. [#24134][#24134]
-- Fixed a bug causing index backfills to fail in a loop after exceeding the GC TTL of their source table. [#24293][#24293]
-- Inverted index queries involving `NULL` are now properly handled. [#24251][#24251]
-- Fixed a bug involving Npgsql and array values. [#24227][#24227]
-- Fixed a panic caused by passing a `Name` type to `has_database_privilege()`. [#24252][#24252]
-- On-disk checksums are now correctly generated during `IMPORT`. If there is existing data created by `IMPORT` that cannot be recreated, use `cockroach dump` to rewrite any affected tables. [#24128][#24128]
-- Attempts to `RESTORE` to a time later than that covered by the latest `BACKUP` are now rejected. [#23727][#23727]
-- Fixed a bug that could prevent disk space from being reclaimed. [#23136][#23136]
-- Replication zone configs no longer accept negative numbers as input. [#22870][#22870]
-- Fixed the occasional selection of sub-optimal rebalance targets. [#23036][#23036]
-- The `cockroach dump` command is now able to dump sequences with non-default parameters. [#23051][#23051] {% comment %}doc{% endcomment %}
-- `SHOW TABLES` is once again able to inspect virtual schemas. [#22994][#22994] {% comment %}doc{% endcomment %}
-- The `CREATE TABLE ... AS` statement now properly supports placeholders in the subquery. [#23006][#23006]
-- Fixed a bug where ranges could get stuck in an infinite "removal pending" state and would refuse to accept new writes. [#22916][#22916]
-- Arrays now support the `IS [NOT] DISTINCT FROM` operators (example below). [#23005][#23005]
-- Fixed incorrect index constraints on primary key columns on unique indexes. [#22977][#22977]
-- Fixed a bug that prevented joins on interleaved tables with certain layouts from working. [#22920][#22920]
-- The conversion from `INTERVAL` to `FLOAT` now properly returns the number of seconds in the interval. [#22892][#22892]
-- Fixed a panic sometimes caused by Flush protocol messages. [#24119][#24119]
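-
-Two of the fixes above can be exercised directly (a small sketch; the values are illustrative):
-
-~~~ sql
-SELECT ('{"a": 1}'::JSONB)::STRING;                   -- JSONB-to-STRING casts now work
-SELECT ARRAY[1, 2] IS NOT DISTINCT FROM ARRAY[1, 2];  -- arrays now support IS [NOT] DISTINCT FROM
-~~~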
-
-
Performance Improvements
-
-- Deleting many rows at once now consumes less memory. [#22991][#22991]
-- Fewer disk writes are required for each database write, increasing write throughput and reducing
- write latency. [#22317][#22317]
-- Reduced the amount of memory used during garbage collection of old versions. [#24209][#24209]
-- Greatly improved the performance of the `DISTINCT` operator when its inputs are known to be sorted. [#24438][#24438] [#24148][#24148]
-- Write requests that result in no-ops are no longer proposed through Raft. [#24345][#24345]
-
-
Build Changes
-
-- Release binaries are now built with enough debug information to produce useful CPU profiles and backtraces. [#24296][#24296]
-
-
-
-
Contributors
-
-This release includes 732 merged PRs by 38 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Mahmoud Al-Qudsi
-- Vijay Karthik (first-time contributor)
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-This release includes usability enhancements, PostgreSQL compatibility improvements, and general bug fixes.
-
-{{site.data.alerts.callout_success}}The EXPORT CSV feature in this release allows you to quickly get data out of CockroachDB and into a format that can be ingested by downstream systems. Unlike the existing ability to export data via a SELECT that outputs to a CSV file, EXPORT uses all nodes in the cluster to parallelize CSV creation for significantly faster processing. Note that this is a prototype feature, so we’d love for you to try it out and create issues if you’d like any enhancements or find any bugs.{{site.data.alerts.end}}
-
-
General Changes
-
-- Added a `/_status/diagnostics/{node_id}` debug endpoint, which returns an [anonymized diagnostics report](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting). [#24813][#24813] [#24997][#24997]{% comment %}doc{% endcomment %}
-- The header of new log files generated by [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) now includes the cluster ID once it has been determined. [#24926][#24926]
-- The cluster ID is now reported with tag `[config]` in the first log file, not only when log files are rotated. [#24993][#24993]
-- Stopped spamming the server logs with "error closing gzip response writer" messages. [#25106][#25106]
-- Enforced stricter validation of [security certificates](https://www.cockroachlabs.com/docs/v2.1/create-security-certificates). [#24687][#24687]
-
-
Enterprise Edition Changes
-
-- Added alpha support for [`EXPORT CSV`](https://www.cockroachlabs.com/docs/v2.1/export). [#25075][#25075]
-
-
SQL Language Changes
-
-- `ROLE` and `STORED` are no longer reserved keywords and can again be used as unrestricted names for databases, tables and columns. [#24629][#24629] [#24554][#24554] {% comment %}doc{% endcomment %}
-- The experimental SQL features `SHOW TESTING_RANGES` and `ALTER ... TESTING_RELOCATE` have been renamed `SHOW EXPERIMENTAL_RANGES` and `ALTER ... EXPERIMENTAL_RELOCATE`. [#24696][#24696] {% comment %}doc{% endcomment %}
-- The `current_schema()` and `current_schemas()` built-in functions now only consider valid schemas, like PostgreSQL does. [#24718][#24718]
-- Clarified the error message produced upon accessing a virtual schema with no database prefix (e.g., when `database` is not set). [#24772][#24772]
-- Improved the error message returned on object creation when no current database is set or only invalid schemas are in the `search_path`. [#24770][#24770]
-- Added more ways to specify an index name for statements that require one (e.g., [`DROP INDEX`](https://www.cockroachlabs.com/docs/v2.1/drop-index), [`ALTER INDEX ... RENAME`](https://www.cockroachlabs.com/docs/v2.1/rename-index), etc.), improving PostgreSQL compatibility. [#24778][#24778] {% comment %}doc{% endcomment %}
-- Errors detected by `SHOW SYNTAX` are now tracked internally like other SQL errors. [#24819][#24819]
-- Added support for `lpad()` and `rpad()` string functions. [#24891][#24891] {% comment %}doc{% endcomment %}
-- [Computed columns](https://www.cockroachlabs.com/docs/v2.1/computed-columns) can now be added with [`ALTER TABLE ... ADD COLUMN`](https://www.cockroachlabs.com/docs/v2.1/add-column). [#24464][#24464] {% comment %}doc{% endcomment %}
-- Added the `TIMETZ` column type and datum. [#24343][#24343]
-- The [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.1/explain) output for [`UPDATE`](https://www.cockroachlabs.com/docs/v2.1/update) statements now also reports [`CHECK`](https://www.cockroachlabs.com/docs/v2.1/check) expressions, for consistency with `INSERT`. [#23373][#23373] {% comment %}doc{% endcomment %}
-- Reduced the amount of RAM used when a query performs further computations on the result of a mutation statement (`INSERT`/`DELETE`/`UPDATE`/`UPSERT`) combined with `RETURNING`. [#23373][#23373]
-- Added the 'base64' option to the `encode()` and `decode()` built-in functions. [#25002][#25002] {% comment %}doc{% endcomment %}
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) now supports hex-encoded byte literals for [`BYTES`](https://www.cockroachlabs.com/docs/v2.1/bytes) columns. [#24859][#24859]
-- Set Returning Functions (SRF) can now be accessed using `(SRF).x`, where `x` is the name of a column returned from the SRF or a `*`. For example, `SELECT (information_schema._pg_expandarray(ARRAY['c', 'b', 'a'])).x` and `SELECT (information_schema._pg_expandarray(ARRAY['c', 'b', 'a'])).*` are now both valid. Also, the naming of the resulting columns from SRFs now provides more information about the resulting tuple. [#24832][#24832] {% comment %}doc{% endcomment %}
-- Removed the `METADATA`, `QUALIFY`, and `EXPRS` options for [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.1/explain). [#25101][#25101] {% comment %}doc{% endcomment %}
-- CockroachDB now properly reports an error when a query attempts to use `ORDER BY` within a function argument list, which is an unsupported feature. [#25146][#25146]
-- `AS OF SYSTEM TIME` queries now accept a negative interval to produce a relative time from the statement's `statement_timestamp()` time. [#24768][#24768]
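-
-For instance, the negative-interval form reads data as of ten seconds before the statement's own timestamp (a brief sketch; the table name is illustrative):
-
-~~~ sql
-SELECT * FROM accounts AS OF SYSTEM TIME '-10s';
-~~~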
-
-
Command-Line Changes
-
-- The `cockroach demo` command now shows the Admin UI URL on startup. [#24738][#24738]
-- [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.1/sql-dump) now supports self and cyclic foreign key references. [#24716][#24716] {% comment %}doc{% endcomment %}
-
-
Admin UI Changes
-
-- Updated the window title for each page to make browser history more useful. [#24634][#24634]
-- Changed the label "bytes" to "used capacity" in the **Nodes** table. [#24843][#24843]
-- The names of dropped schema objects on `DROP DATABASE ... CASCADE` are now displayed. [#24852][#24852]
-- The **Cluster Overview** now adjusts to take advantage of the size of the screen. [#24849][#24849]
-- Fixed a bug where the **Node List** could get cut off. [#24849][#24849]
-- Improved the responsiveness of the Prometheus metrics endpoint on very overloaded nodes. [#25083][#25083]
-- Time series charts now display data points at more consistent timestamps. [#24856][#24856]
-
-
Bug Fixes
-
-- Converted a panic related to an unsupported type to an error. [#24688][#24688]
-- [`ALTER INDEX ... RENAME`](https://www.cockroachlabs.com/docs/v2.1/rename-index) can now be used on the primary index. [#24776][#24776] {% comment %}doc{% endcomment %}
-- Fixed a scenario in which a node could deadlock while starting up. [#24808][#24808]
-- It is once again possible to use a simply qualified table name in qualified stars (e.g., `SELECT mydb.kv.* FROM kv`) for compatibility with CockroachDB v1.x. [#24811][#24811] {% comment %}doc{% endcomment %}
-- Ranges in [partitioned tables](https://www.cockroachlabs.com/docs/v2.1/partitioning) now properly split to respect their configured maximum size. [#24896][#24896]
-- Fixed a bug where `SELECT * FROM [DELETE FROM ... RETURNING ...] LIMIT 1` or `WITH d AS (DELETE FROM ... RETURNING ...) SELECT * FROM d LIMIT 1` would fail to properly delete some rows. This is fixed for `SELECT * FROM [INSERT ... RETURNING ...] LIMIT 1` or `WITH d AS (INSERT ... RETURNING ...) SELECT * FROM d LIMIT 1` as well. [#23373][#23373]
-- Removed a limitation where `UPDATE`, `INSERT`, or `UPSERT` would fail if the number of modified rows was too large. [#23373][#23373] {% comment %}doc{% endcomment %}
-- [`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert) now properly reports the count of modified rows when `RETURNING` is not specified. [#23373][#23373]
-- When [adding a column](https://www.cockroachlabs.com/docs/v2.1/add-column), CockroachDB now verifies that the column is referenced by no more than one foreign key. Existing tables with a column that is used by multiple foreign key constraints should be manually changed to have at most one foreign key per column. [#25060][#25060] {% comment %}doc{% endcomment %}
-- Corrected the documentation for the `age()` [built-in function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators). [#25132][#25132] {% comment %}doc{% endcomment %}
-- Fixed a bug causing `PREPARE` to hang when run in the same transaction as a `CREATE TABLE` statement. [#23816][#23816]
-
-
Performance Improvements
-
-- Improved the performance of hash joins. [#24577][#24577]
-- Aggregations are now streamed based on the ordering on the `GROUP BY` columns. [#24113][#24113]
-- Some `SELECT`s with limits no longer require a second low-level scan, resulting in much faster execution. [#24790][#24790]
-
-
Build Changes
-
-- Build metadata, like the commit SHA and build time, is now properly injected into the binary when using Go 1.10 and building from a symlink. [#25008][#25008]
-
-
Doc Updates
-
-- Added a [performance tuning guide](https://www.cockroachlabs.com/docs/v2.1/kubernetes-performance) for running CockroachDB in Kubernetes. [#2896][#2896]
-- Clarified [replication zone levels](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#replication-zone-levels) and added a warning about increasing the default replication factor without increasing the replication factor of system ranges. [#2901][#2901]
-- Updated the [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) documentation to recommend decimal notation for flags that accept percentages. [#3056][#3056]
-- Documented how to use the [`server.shutdown.drain_wait`](https://www.cockroachlabs.com/docs/v2.1/operational-faqs#how-do-i-prepare-for-planned-node-maintenance) cluster setting to prevent a load balancer from sending client traffic to a node about to be shut down. [#2903][#2903]
-- Documented the [`intervalstyle`](https://www.cockroachlabs.com/docs/v2.1/set-vars) session variable. [#2904][#2904]
-- Added documentation on the [`SHOW EXPERIMENTAL_RANGES`](https://www.cockroachlabs.com/docs/v2.1/show-experimental-ranges) statement. [#2930][#2930]
-- Updated the [Node Map](https://www.cockroachlabs.com/docs/v2.1/enable-node-map#location-coordinates-for-reference) documentation to provide location coordinates for AWS, Azure, and Google Cloud. [#2942][#2942]
-- Updated the `SPLIT AT` documentation to show how to [split a table with a composite primary key](https://www.cockroachlabs.com/docs/v2.1/split-at#split-a-table-with-a-composite-primary-key). [#2950][#2950]
-- Documented the `--temp-dir` flag for [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node). [#2955][#2955]
-- Expanded the [Production Checklist](https://www.cockroachlabs.com/docs/v2.1/recommended-production-settings) to recommend a higher replication factor when using local disks rather than a cloud provider's network-attached disks, which are often replicated underneath the covers. [#3001][#3001]
-
-
-
-
Contributors
-
-This release includes 224 merged PRs by 37 authors. We would like to thank the following contributors from the CockroachDB community, with special thanks to first-time contributors Bob Potter, Karan Vaidya, dchenk, and phelanm.
-
-- Bob Potter
-- Brett Snyder
-- Jingguo Yao
-- Karan Vaidya
-- Vijay Karthik
-- dchenk
-- phelanm
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-This release includes general usability enhancements and bug fixes as well as:
-
-- [**Easier data migrations from MySQL**](https://www.cockroachlabs.com/docs/v2.1/migration-overview): The `IMPORT` feature now supports a subset of MySQL export formats. We will continue to make migrations easier in the alphas leading up to 2.1.
-
-Please give this feature and the ones below a try. If you see something that can be improved, we’d love to hear from you on [GitHub](https://github.com/cockroachdb/cockroach/issues) or the [Forum](https://forum.cockroachlabs.com/).
-
-
General Changes
-
-- New clusters and existing clusters [upgraded](https://www.cockroachlabs.com/docs/v2.1/upgrade-cockroach-version) to this version of CockroachDB will include two new empty databases, `defaultdb` and `postgres`. The `defaultdb` database is automatically used for clients that connect without a current database set (e.g., without a database component in the connection URL). The `postgres` database is provided for compatibility with PostgreSQL client frameworks that require it to exist when the database server has been freshly installed. Both new databases behave like any other regular database and, if deemed unnecessary, can be [manually deleted](https://www.cockroachlabs.com/docs/v2.1/drop-database). [#24735][#24735] {% comment %}doc{% endcomment %}
-- The new `compactor.threshold_bytes` and `max_record_age` [cluster settings](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) can be used to configure the compactor. [#25397][#25397] {% comment %}doc{% endcomment %}
-- After [upgrading a cluster](https://www.cockroachlabs.com/docs/v2.1/upgrade-cockroach-version) from v2.0 to v2.1, it is no longer necessary to manually finalize the upgrade. [#24987][#24987]
-
-
SQL Language Changes
-
-- [Collated strings](https://www.cockroachlabs.com/docs/v2.1/collate) can now be used in `WHERE` clauses on indexed columns. [#25169][#25169]
-- The new `CANCEL QUERIES` and `CANCEL SESSIONS` variants of the [`CANCEL QUERY`](https://www.cockroachlabs.com/docs/v2.1/cancel-query) and [`CANCEL SESSION`](https://www.cockroachlabs.com/docs/v2.1/cancel-session) statements cancel multiple queries or sessions at once. Likewise, the new `CANCEL/PAUSE/RESUME JOBS` variants of the [`CANCEL JOB`](https://www.cockroachlabs.com/docs/v2.1/cancel-job), [`PAUSE JOB`](https://www.cockroachlabs.com/docs/v2.1/pause-job), and [`RESUME JOB`](https://www.cockroachlabs.com/docs/v2.1/resume-job) statements operate on multiple jobs at once (see the example after this list). [#25157][#25157] {% comment %}doc{% endcomment %}
-- The `Level` and `Type` columns of [`EXPLAIN (VERBOSE)`](https://www.cockroachlabs.com/docs/v2.1/explain) results are now hidden; if they are needed, they can be `SELECT`ed explicitly. [#25172][#25172] {% comment %}doc{% endcomment %}
-- All users now automatically belong to the new `public` role. This role makes it possible to grant privileges on an object for all users at once, e.g., `GRANT SELECT ON mytable TO public;`. [#25099][#25099]
-- The binary Postgres wire format is now supported for [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.1/interval) values. [#25242][#25242] {% comment %}doc{% endcomment %}
-- Prevented [`DROP TABLE`](https://www.cockroachlabs.com/docs/v2.1/drop-table) from using too much CPU. [#24983][#24983]
-- Improved [`SET TRACING`](https://www.cockroachlabs.com/docs/v2.1/set-vars#set-tracing) so that a client can more easily trace around a statement that produces errors. [#25262][#25262]
-- Added the `generate_subscripts()` [built-in function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators). [#25295][#25295] {% comment %}doc{% endcomment %}
-- Improved the documentation of the `now()`, `current_time()`, `current_date()`, `current_timestamp()`,
-`clock_timestamp()`, `statement_timestamp()`, and `cluster_logical_timestamp()` [built-in functions](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators). [#25327][#25327] {% comment %}doc{% endcomment %}
-- Running [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate) without `CASCADE` on a table that has interleaved table children now returns an error instead of proceeding to delete both tables. [#25265][#25265] {% comment %}doc{% endcomment %}
-- Tuples can now be labeled using the new grammar `((1,2,3) AS a,b,c)`. [#25283][#25283] {% comment %}doc{% endcomment %}
-- Labeled tuples can now be accessed using their labels, but doing so requires an extra level of parentheses, e.g., `SELECT (((1,'2',true) AS a, b, c)).a`. [#25810][#25810]
-- [`SHOW TRACE FOR <statement>`](https://www.cockroachlabs.com/docs/v2.1/) now runs `<statement>` through the [DistSQL](https://www.cockroachlabs.com/docs/v2.1/architecture/sql-layer#distsql) execution engine, if supported. `SHOW KV TRACE FOR <statement>` still runs `<statement>` through local SQL. [#24709][#24709] {% comment %}doc{% endcomment %}
-- Introduced two experimental scalar operators, `IFERROR()` and `ISERROR()`. They may be documented for public use in the future. [#25304][#25304]
-- The `server.time_until_store_dead` [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) can no longer be set to less than `1m15s`. Setting it to lower values was previously allowed but not safe, since it can cause bad rebalancing behavior. [#25598][#25598] {% comment %}doc{% endcomment %}
-- [`CANCEL JOB`](https://www.cockroachlabs.com/docs/v2.1/cancel-job) can now be executed on long-running schema change jobs, causing them to terminate early and roll back. [#25571][#25571] {% comment %}doc{% endcomment %}
-- Added the `array_to_string()` [built-in function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators). [#25681][#25681] {% comment %}doc{% endcomment %}
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) now supports MySQL's tabbed `OUTFILE` format. [#25615][#25615] {% comment %}doc{% endcomment %}
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) now supports `mysqldump` SQL as a data format. [#25783][#25783] {% comment %}doc{% endcomment %}
-- The experimental lookup join feature now supports secondary indexes. [#25628][#25628] {% comment %}doc{% endcomment %}
-- Stored, computed columns can now be converted to regular columns by running `ALTER TABLE t ALTER COLUMN c DROP STORED`. [#25819][#25819] {% comment %}doc{% endcomment %}
-- The experimental lookup join feature now supports left outer joins. [#25644][#25644] {% comment %}doc{% endcomment %}
-- [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate) commands are now logged in the event log. [#25868][#25868]
-- Improved [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) error messages. [#26032][#26032]
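-
-A sketch of the batched `CANCEL QUERIES` form noted above (the application name is an assumption for illustration):
-
-~~~ sql
-CANCEL QUERIES SELECT query_id FROM [SHOW CLUSTER QUERIES]
-  WHERE application_name = 'loadgen';
-~~~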
-
-
Command-Line Changes
-
-- Changing or removing a [replication zone](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones) config now causes events to be written to the system event log. [#25250][#25250]
-- Messages that refer to an invoked command (e.g., "Failed running start") are no longer confused by the presence of flags before the first argument (e.g., `cockroach --no-color start`). [#25246][#25246]
-- [Replication zone constraints](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#replication-constraints) are now validated when they are set: required attributes and localities must match at least one node in the cluster, which catches typos. [#25421][#25421] {% comment %}doc{% endcomment %}
-
-
Admin UI Changes
-
-- Running unit tests for the Admin UI now depends on the installation of Google Chrome. [#25140][#25140]
-- Added RocksDB compactions/flushes to Storage graphs. [#25428][#25428]
-- Added a Stores report page, including encryption status. [#26040][#26040]
-- Removed time selectors and tier labels during [Node Map](https://www.cockroachlabs.com/docs/v2.1/enable-node-map) setup. [#25280][#25280]
-
-
Bug Fixes
-
-- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) command once again does not prompt for a password when a certificate is provided. [#25252][#25252]
-- Corrected the behavior of `GREATEST` and `LEAST` built-ins when they have a leading `NULL` argument. [#25882][#25882]
-- CockroachDB now properly reports an error when using the internal-only functions `final_variance()` and `final_stddev()` instead of causing a crash. [#25158][#25158]
-- The `constraint_schema` column in `information_schema.constraint_column_usage` now displays the constraint's schema instead of its catalog. [#25190][#25190] {% comment %}doc{% endcomment %}
-- `BEGIN; RELEASE SAVEPOINT;` now returns an error instead of causing a crash. [#25247][#25247]
-- Fixed a bug where the sessions endpoint in the Admin UI would return an error when there was an active transaction. [#25249][#25249]
-- Corrected the CockroachDB-specific, currently undocumented conversion from [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.1/interval) to/from numeric types. [#25257][#25257]
-- Fixed problems with [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) sometimes failing after node decommissioning. [#25162][#25162]
-- Prevented queries that use placeholders for tuple types from causing a crash. [#25269][#25269]
-- Fixed a rare `segfault` that occurred when reading from an invalid memory location returned from C++. [#25347][#25347]
-- Fixed a bug with `IS DISTINCT FROM` not returning `NULL` values that pass the condition in some cases. [#25336][#25336]
-- Restarting a CockroachDB server on Windows no longer fails due to file system locks in the store directory. [#25267][#25267]
-- Prevented the consistency checker from deadlocking. This would previously manifest itself as a steady number of replicas queued for consistency checking on one or more nodes and would resolve by restarting the affected nodes. [#25456][#25456]
-- Fixed a crash in some cases when using a `GROUP BY` with `HAVING`. [#25574][#25574]
-- Fixed a nil pointer dereference when importing data containing date values. [#25661][#25661]
-- Numeric literal values no longer silently lose information after a certain precision. [#25597][#25597]
-- Prevented spurious `BudgetExceededErrors` for some queries that read a lot of [`JSON`](https://www.cockroachlabs.com/docs/v2.1/jsonb) data from disk. [#25679][#25679]
-- Fixed query errors in some cases involving a `NULL` constant that is cast to a specific type. [#25735][#25735]
-- Fixed a crash when trying to plan certain `UNION ALL` queries. [#25747][#25747]
-- Fixed a crash caused by inserting data into a table with [computed columns](https://www.cockroachlabs.com/docs/v2.1/computed-columns) that reference other columns not present in the `INSERT` statement. [#25682][#25682]
-- `EXPLAIN (DISTSQL)` now properly reports that plans containing subqueries cannot be run through the [DistSQL execution engine](https://www.cockroachlabs.com/docs/v2.1/architecture/sql-layer#distsql). [#25618][#25618]
-- CockroachDB no longer crashes if the control statements `CANCEL`/`PAUSE`/`RESUME` are given values using special PostgreSQL types (e.g., `NAME`). [#25844][#25844]
-- Fixed a panic when using unordered aggregations. [#26042][#26042]
-- Fixed an error caused by [`INET`](https://www.cockroachlabs.com/docs/v2.1/inet) constants in some rare cases. [#26086][#26086]
-- Fixed an error caused by empty arrays in some cases. [#26090][#26090]
-- Previously, expired compactions could stay in the queue forever. Now, they are removed when they expire. [#26039][#26039]
-- Fixed problems using tables with [foreign key](https://www.cockroachlabs.com/docs/v2.1/foreign-key) or [interleaved](https://www.cockroachlabs.com/docs/v2.1/interleave-in-parent) references to other tables when the tables were created in the same transaction. [#25786][#25786]
-
-
Doc Updates
-
-- Documented [special syntax forms](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators#special-syntax-forms) of built-in SQL functions and [conditional and function-like operators](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators#conditional-and-function-like-operators), and updated the [SQL operator order of precedence](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators#operators). [#3192][#3192]
-- Added best practices on [understanding and avoiding transaction contention](https://www.cockroachlabs.com/docs/v2.1/performance-best-practices-overview#understanding-and-avoiding-transaction-contention) and a related [FAQ](https://www.cockroachlabs.com/docs/v2.1/operational-faqs#why-would-increasing-the-number-of-nodes-not-result-in-more-operations-per-second). [#3156][#3156]
-- Improved the documentation of [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v2.1/as-of-system-time). [#3155][#3155]
-- Expanded the [manual deployment](https://www.cockroachlabs.com/docs/v2.1/manual-deployment) guides to cover running a sample workload against a cluster. [#3149][#3149]
-- Documented the [`TIMETZ`](https://www.cockroachlabs.com/docs/v2.1/time) data type. [#3102][#3102]
-- Added FAQs on [generating unique, slowly increasing sequential numbers](https://www.cockroachlabs.com/docs/v2.1/sql-faqs#how-do-i-generate-unique-slowly-increasing-sequential-numbers-in-cockroachdb) and [the differences between `UUID`, sequences, and `unique_rowid()`](https://www.cockroachlabs.com/docs/v2.1/sql-faqs#what-are-the-differences-between-uuid-sequences-and-unique_rowid). [#3104][#3104]
-
-
-
-
Contributors
-
-This release includes 304 merged PRs by 38 authors. We would like to thank the following contributors from the CockroachDB community, with special thanks to first-time contributors Nishant Gupta, wabada, and yuzefovich.
-
-- Garvit Juniwal
-- Gustav Paul
-- Karan Vaidya
-- Nishant Gupta
-- Vijay Karthik
-- wabada
-- Yahor Yuzefovich
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-For our July 2nd alpha release, in addition to PostgreSQL compatibility enhancements, general usability improvements, and bug fixes, we want to highlight a few major benefits:
-
-- [**Get visibility into query performance with the Statements pages**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-statements-page) - The Web UI can now surface statistics about queries along with visualizations to help identify application problems quickly.
-- [**Get up and running faster with `IMPORT MYSQLDUMP/PGDUMP`**](https://www.cockroachlabs.com/docs/v2.1/migration-overview) - It is now much easier to transfer existing databases to CockroachDB.
-- [**Improved data security with Encryption at Rest (enterprise)**](https://www.cockroachlabs.com/docs/v2.1/encryption) - With this enhancement, you can now encrypt your CockroachDB files on disk, rotate keys, and monitor encryption status without having to make changes to your application code.
-- [**Stream changes to Kafka with CDC (enterprise)**](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) - CockroachDB can now stream changes into Apache Kafka to support downstream processing such as reporting, caching, or full-text indexing.
-- **Secure your Web UI with User Authentication** - A login page can now be enabled to control who can access the Web UI in secure clusters.
-
-Please give these features and the ones below a try. If you see something that can be improved, we’d love to hear from you on [GitHub](https://github.com/cockroachdb/cockroach/issues) or the [Forum](https://forum.cockroachlabs.com/).
-
-
Backward-incompatible changes
-
-- CockroachDB now uses a different algorithm to generate column names for complex expressions in [`SELECT`](https://www.cockroachlabs.com/docs/v2.1/select-clause) clauses when `AS` is not used. The results are more compatible with PostgreSQL but may appear different to client applications. This does not impact most uses of SQL, where the rendered expressions are sufficiently simple (simple function applications, reuses of existing columns) or when `AS` is used explicitly. [#26550][#26550]
-- The output columns for the statement [`SHOW CONSTRAINTS`](https://www.cockroachlabs.com/docs/v2.1/show-constraints) were changed. The previous interface was experimental; the new interface will now be considered stable. [#26478][#26478] {% comment %}doc{% endcomment %}
-
-
General changes
-
-- Metrics can now be sent to a Graphite endpoint specified using the `external.graphite.endpoint` [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings). The `external.graphite.interval` setting controls the interval at which this happens (example below). [#25227][#25227]
-- Added a [config file and instructions](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-daemonset-secure.yaml) for running CockroachDB in secure mode in a Kubernetes DaemonSet. [#26816][#26816] {% comment %}doc{% endcomment %}
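-
-A sketch of pointing a cluster at Graphite (the endpoint address and interval values are assumptions):
-
-~~~ sql
-SET CLUSTER SETTING external.graphite.endpoint = 'graphite.example.com:2003';
-SET CLUSTER SETTING external.graphite.interval = '30s';
-~~~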
-
-
Enterprise edition changes
-
-- The new `SHOW BACKUP RANGES` and `SHOW BACKUP FILES` statements show details about the ranges and files, respectively, that comprise a backup. [#26450][#26450] {% comment %}doc{% endcomment %}
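-
-For example (the backup location is illustrative):
-
-~~~ sql
-SHOW BACKUP RANGES 'nodelocal:///backups/2018-07-02';
-SHOW BACKUP FILES 'nodelocal:///backups/2018-07-02';
-~~~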
-
-
SQL language changes
-
-- If a computed column's expression results in an error, the name of the computed column is now added to the error returned to the user. This makes it easier for users to understand why an otherwise valid operation might fail. [#26054][#26054]
-- Implemented the minus operation between a JSON object and a text array. [#26183][#26183] {% comment %}doc{% endcomment %}
-- Fixed some error messages to more closely match PostgreSQL error messages, including the corresponding PostgreSQL
- error codes. [#26290][#26290]
-- Added an empty `pg_stat_activity` virtual table for compatibility with DBeaver and other SQL clients that require it. [#26249][#26249]
-- The new `EXPLAIN (DISTSQL, ANALYZE)` statement annotates DistSQL execution plans with collected execution statistics. [#25849][#25849] {% comment %}doc{% endcomment %}
-- [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) now supports the PostgreSQL `COPY` format. [#26334][#26334] {% comment %}doc{% endcomment %}
-- The output of [`SHOW SESSIONS`](https://www.cockroachlabs.com/docs/v2.1/show-sessions) now includes the number of bytes currently allocated by the session, and the maximum number of allocated bytes that the session ever owned at once. Note that these numbers do not include the bytes allocated for the session by remote nodes. [#25395][#25395] {% comment %}doc{% endcomment %}
-- The `bytea_output` [session variable](https://www.cockroachlabs.com/docs/v2.1/set-vars) now controls how byte arrays are converted to strings and reported back to clients, for compatibility with PostgreSQL. [#25835][#25835] {% comment %}doc{% endcomment %}
-- Added placeholder `information_schema.routines` and `information_schema.parameters` for compatibility with Navicat, PGAdmin, and other clients that require them. [#26327][#26327] {% comment %}doc{% endcomment %}
-- CockroachDB now recognizes aggregates in `ORDER BY` clauses even when there is no `GROUP BY` clause nor aggregation performed, for compatibility with PostgreSQL. [#26425][#26425] {% comment %}doc{% endcomment %}
-- Added the `pg_is_in_recovery()` [function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) for compatibility with PostgreSQL tools. [#26445][#26445] {% comment %}doc{% endcomment %}
-- CockroachDB now supports simple forms of PostgreSQL's `ROWS FROM(...)` syntax. [#26223][#26223] {% comment %}doc{% endcomment %}
-- CockroachDB now generates a simple column name when using an SRF that produces multiple columns. [#26223][#26223]
-- CockroachDB now properly handles some uses of multiple SRFs in the same `SELECT` clause in a way compatible with
- PostgreSQL. [#26223][#26223]
-- Added the `pg_is_xlog_replay_paused()` [function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) for compatibility with PostgreSQL tools. [#26462][#26462] {% comment %}doc{% endcomment %}
-- Added the `pg_catalog.pg_seclabel` and `pg_catalog.pg_shseclabel` tables for compatibility with Postgres tools. Note that we do not support adding security labels. [#26515][#26515]
-- CockroachDB now supports [`INSERT ... ON CONFLICT DO NOTHING`](https://www.cockroachlabs.com/docs/v2.1/insert) without any specified columns; on a conflict with any [`UNIQUE`](https://www.cockroachlabs.com/docs/v2.1/unique) column, the insert will not continue (see the first sketch after this list). [#26465][#26465] {% comment %}doc{% endcomment %}
-- CockroachDB now supports the `bit_length()`, `quote_ident()`, `quote_literal()`, and `quote_nullable()` [built-in functions](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators), and the aliases `char_length()` and `character_length()` for `length()`, for compatibility with PostgreSQL. [#26586][#26586] {% comment %}doc{% endcomment %}
-- If a function name is typed with an invalid schema or invalid case, the error message now tries to provide a suggestion for an alternate spelling. [#26588][#26588]
-- CockroachDB can now evaluate set-generating functions with arguments that refer to the `FROM` clause. In particular, this makes it possible to use functions like `json_each()` and `json_object_keys()` over [`JSONB`](https://www.cockroachlabs.com/docs/v2.1/jsonb) columns (see the second sketch after this list). [#26503][#26503] {% comment %}doc{% endcomment %}
-- Added prototype support for [`IMPORT ... MYSQLDUMP`](https://www.cockroachlabs.com/docs/v2.1/import), including the ability to import entire (multi-table) mysqldump files. [#26164][#26164] {% comment %}doc{% endcomment %}
-- [`CHECK`](https://www.cockroachlabs.com/docs/v2.1/check) constraints are now checked when updating a conflicting row in [`INSERT ... ON CONFLICT DO UPDATE`](https://www.cockroachlabs.com/docs/v2.1/insert) statements. [#26642][#26642] {% comment %}doc{% endcomment %}
-- Labeled tuples can now be accessed using their labels (e.g., `SELECT (x).word FROM (SELECT pg_expand_keywords() AS x)`) or a star (e.g., `SELECT (x).* FROM (SELECT pg_expand_keywords() AS x)`). [#26628][#26628] {% comment %}doc{% endcomment %}
-- An error is now returned to the user instead of panicking when trying to add a column with a [`UNIQUE`](https://www.cockroachlabs.com/docs/v2.1/unique) constraint when that column's type is not indexable. [#26684][#26684] {% comment %}doc{% endcomment %}
-- Introduced the `sql.failure.count` metric, which counts the number of queries that result in an error. [#26731][#26731]
-- Added support for decompressing [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) files with gzip or bzip. [#26796][#26796] {% comment %}doc{% endcomment %}
-- Added initial support for `IMPORT` with pg_dump files. [#26740][#26740] {% comment %}doc{% endcomment %}
-- Added the `like_escape()`, `ilike_escape()`, `not_like_escape()`, `not_ilike_escape()`, `similar_escape()`, and `not_similar_escape()` [built-in functions](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) for use when an optional `ESCAPE` clause is present. [#26176][#26176] {% comment %}doc{% endcomment %}
-- Added support for set-returning functions in distributed SQL execution. [#26739][#26739]
-- Added a cluster setting to enable the experimental cost-based optimizer. [#26299][#26299]
-- Added the `pg_catalog.pg_shdescription` table for compatibility with PostgreSQL tools. Note that CockroachDB does not support adding descriptions to shared database objects. [#26474][#26474]
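-
-Two sketches for features noted above. First, `INSERT ... ON CONFLICT DO NOTHING` without naming conflict columns (the table and values are illustrative):
-
-~~~ sql
-INSERT INTO users (id, email) VALUES (1, 'a@example.com') ON CONFLICT DO NOTHING;
-~~~
-
-Second, a set-generating function whose argument refers to the `FROM` clause (the `docs` table and its `JSONB` column `payload` are assumptions):
-
-~~~ sql
-SELECT id, json_object_keys(payload) FROM docs;
-~~~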
-
-
Command-line changes
-
-- [`cockroach quit`](https://www.cockroachlabs.com/docs/v2.1/stop-a-node) now emits warning messages on its standard error stream, not standard output. [#26158][#26158] {% comment %}doc{% endcomment %}
-- [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) now recognizes the values `on`, `off`, `0`, `1`, `true` and `false` to set client-side boolean parameters with `set`. [#26287][#26287] {% comment %}doc{% endcomment %}
-- [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) now recognizes `set option=value` as an alias to `set option value`. [#26287][#26287] {% comment %}doc{% endcomment %}
-- `cockroach demo` now supports more options also supported by `cockroach sql`, including `--execute`, `--format`,
- `--echo-sql` and `--safe-updates`. [#26287][#26287] {% comment %}doc{% endcomment %}
-- `cockroach demo` includes the welcome messages also printed by `cockroach sql`. [#26287][#26287]
-- `cockroach demo` now uses the standard `defaultdb` database instead of creating its own `demo` database. [#26287][#26287] {% comment %}doc{% endcomment %}
-- [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) and `cockroach demo` now accept `--set` to run `set` commands prior to starting the shell
- or running commands via `-e`. [#26287][#26287] {% comment %}doc{% endcomment %}
-
-
Admin UI changes
-
-- Authentication in the Admin UI can now be enabled for secure clusters by setting the environment variable `COCKROACH_EXPERIMENTAL_REQUIRE_WEB_LOGIN=TRUE`. [#25005][#25005]
-- System databases are now listed after all user databases on the [**Databases** page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-databases-page). [#25817][#25817] {% comment %}doc{% endcomment %}
-- Added **Statements** and **Statement Details** pages showing fingerprints of incoming statements and basic statistics about them. [#24485][#24485]
-- Lease transfers are now shown in the **Range Operations** graph on the [**Replication** dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-replication-dashboard). [#26653][#26653] {% comment %}doc{% endcomment %}
-- Added a debug page showing how table data is distributed across nodes, as well as the zone configs which are affecting that distribution. [#24855][#24855] {% comment %}doc{% endcomment %}
-
-
Bug fixes
-
-- Fixed an issue where the Table details page in the Admin UI would become unresponsive after some time. [#26636][#26636]
-- Fixed a bug where [`cockroach quit`](https://www.cockroachlabs.com/docs/v2.1/stop-a-node) would erroneously fail even though the node had already shut down successfully. [#26158][#26158]
-- [`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert) is now properly able to write `NULL` values to every column in tables containing more than one column family. [#26169][#26169]
-- Fixed a bug causing index creation to fail under rare circumstances. [#26265][#26265]
-- Corrected `NULL` handling during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) of `MYSQLOUTFILE`. [#26275][#26275]
-- Fixed concurrent access to the same file when using encryption. [#26377][#26377]
-- Fixed a bug where a prepared query would not produce the right value for `current_date()` if prepared on one day and executed on the next. [#26370][#26370]
-- Rows larger than 8192 bytes are now supported by the "copy from" protocol. [#26345][#26345]
-- Trying to "copy from stdin" into a table that doesn't exist no longer drops the connection. [#26345][#26345]
-- CockroachDB now produces a clearer message when special functions (e.g., `generate_series()`) are used in an invalid context (e.g., `LIMIT`). [#26425][#26425]
-- Fixed a rare crash on node [decommissioning](https://www.cockroachlabs.com/docs/v2.1/remove-nodes). [#26706][#26706]
-- Commands are now abandoned earlier once a deadline has been reached. [#26643][#26643]
-- Using [`SHOW TRACE FOR SESSION`](https://www.cockroachlabs.com/docs/v2.1/show-trace) multiple times without an intervening `SET tracing` statement now properly outputs the trace without introducing extraneous duplicate rows. [#26746][#26746]
-- The output of debug and tracing commands is no longer corrupted when byte array values contain invalid UTF-8 sequences. [#26769][#26769]
-- Joins across two [interleaved tables](https://www.cockroachlabs.com/docs/v2.1/interleave-in-parent) no longer return incorrect results under certain circumstances when the equality columns aren't all part of the interleaved columns. [#26756][#26756]
-- Prepared statements using [`RETURNING NOTHING`](https://www.cockroachlabs.com/docs/v2.1/parallel-statement-execution) that are executed using the `EXECUTE` statement are now properly parallelized. [#26668][#26668]
-- The pretty-print code for `SHOW` now properly quotes the variable name, and the pretty-printing code for an index definition inside `CREATE TABLE` now properly indicates whether the index was inverted. [#26923][#26923]
-- Within a [transaction](https://www.cockroachlabs.com/docs/v2.1/transactions), DML statements are now allowed after a [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate). [#26051][#26051]
-
-
Performance improvements
-
-- Improved the throughput of highly contended writes with the new `contentionQueue`. [#25014][#25014]
-- The performance impact of dropping a large table has been substantially reduced. [#26449][#26449]
-- Using tuples in a query no longer reverts to single-node local SQL execution. [#25860][#25860]
-- CockroachDB's internal monitoring time series are now encoded using a more efficient on-disk format to provide considerable space savings. Monitoring data written in the old format will not be converted but will still be queryable. [#26614][#26614]
-- Improved the performance of the `sortChunks` processor. [#26874][#26874]
-
-
Build changes
-
-- Release binaries are now built with runtime AES detection. [#26649][#26649]
-
-
Doc updates
-
-- Added `systemd` configs and instructions to [deployment tutorials](https://www.cockroachlabs.com/docs/v2.1/manual-deployment). [#3268][#3268]
-- Added instructions for [importing data from Postgres dump files](https://www.cockroachlabs.com/docs/v2.1/migration-overview). [#3306][#3306]
-- Expanded the first level of the 2.1 docs sidenav by default. [#3270][#3270]
-- Updated the [Kubernetes tutorials](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes) to reflect that pods aren't "Ready" before init. [#3291][#3291]
-
-
-
-
Contributors
-
-This release includes 328 merged PRs by 35 authors. We would like to thank the following contributors from the CockroachDB community, with special thanks to first-time contributors Chris Seto and Emmanuel.
-
-- Chris Seto
-- Emmanuel
-- neeral
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-For our July 30th alpha release, in addition to PostgreSQL compatibility enhancements, general usability improvements, and bug fixes, we want to highlight some major benefits:
-
-- [**Troubleshoot performance problems with hardware metrics**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-hardware-dashboard) - The new Web UI **Hardware** dashboard provides more visibility into how cluster CPU, networking, disk, and memory resources are being utilized so you can quickly identify and remove performance bottlenecks.
-- [**Easier PostgreSQL migration**](https://www.cockroachlabs.com/docs/v2.1/migration-overview) - We’ve made further enhancements to reduce PostgreSQL migration friction. Notable improvements include support for foreign keys, sequences, and `COPY` in `IMPORT ... PGDUMP`.
-- [**Monitor Kubernetes-orchestrated clusters with Prometheus**](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes) - We expanded our guides for running CockroachDB on Kubernetes in production to include setting up monitoring and alerting with Prometheus and Alertmanager.
-
-Please give these features and the ones below a try. If you see something that can be improved, we’d love to hear from you on [GitHub](https://github.com/cockroachdb/cockroach/issues) or the [Forum](https://forum.cockroachlabs.com/).
-
-
General changes
-
-- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) is now on by default. [#26893][#26893]
-- The time series system used by CockroachDB to store internal monitoring data now uses pre-computed rollups to significantly increase the duration for which monitoring data is available while using less storage. Monitoring data will be available for up to a year by default; however, data older than seven days will be stored at a reduced resolution and will thus only provide detail at 30-minute intervals. [#27121][#27121]
-- Building CockroachDB from source now requires Yarn version 1.7.0 or above. [#27262][#27262]
-- Added support for signing server and client certificates by different CAs. [#27636][#27636]
-
-
Enterprise edition changes
-
-- Core dumps are now disabled when [encryption](https://www.cockroachlabs.com/docs/v2.1/encryption) is enabled. [#27426][#27426]
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) now use an asynchronous Kafka producer, increasing throughput. [#27421][#27421]
-
-
SQL language changes
-
-- CockroachDB now supports custom frame specification for [window functions](https://www.cockroachlabs.com/docs/v2.1/window-functions) using `ROWS` (fully supported) and `RANGE` modes. For `RANGE`, `<offset> PRECEDING` and `<offset> FOLLOWING` are not supported (see the example after this list). [#26666][#26666]
-- The `SNAPSHOT` [isolation level](https://www.cockroachlabs.com/docs/v2.1/transactions#isolation-levels) has been removed. Transactions that request to use it are now mapped to `SERIALIZABLE`. [#27040][#27040].
-- When the cost-based optimizer is enabled, it will also affect prepared queries. [#27034][#27034]
-- Upon failing to gather data from other nodes, the [`SHOW CLUSTER QUERIES`](https://www.cockroachlabs.com/docs/v2.1/show-queries) and [`SHOW CLUSTER SESSIONS`](https://www.cockroachlabs.com/docs/v2.1/show-sessions) statements now report the details of the error. [#26821][#26821]
-- Improved the description for the `age()` [built-in function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators). [#27082][#27082]
-- The `pg_get_indexdef()` built-in function now supports 3 arguments. [#27161][#27161]
-- Added `COPY` support to `IMPORT ... PGDUMP`. [#27062][#27062]
-- The new `max_row_size` option overrides default limits on line size for `IMPORT ... PGDUMP` and `PGCOPY`. [#27062][#27062]
-- The `SHOW TRACE FOR <statement>` statement was incomplete and incorrect and has thus been removed. To turn on tracing, use `SET tracing = ...` and `SHOW TRACE FOR SESSION`, or enable the new `auto_trace` client-side option for `cockroach sql`. [#26729][#26729] [#27805][#27805]
-- `SET tracing` accepts a new option `results`, which causes result rows and row counts to be copied to the session trace. This was previously implicit with option `kv` but must now be specified explicitly when desired. [#26729][#26729]
-- The word `view` is now supported as an identifier, as in PostgreSQL. [#27204][#27204]
-- `IMPORT ... PGDUMP` no longer requires the `--no-owner` flag. [#27268][#27268]
-- `AS OF SYSTEM TIME` can now use some more complex expressions to compute the desired timestamp. [#27206][#27206]
-- Added support for the `convert_from()` and `convert_to()` built-in functions, for compatibility with PostgreSQL. For `convert_from()`, however, in contrast with PostgreSQL, the function in CockroachDB accepts NUL in the input, because null characters are valid in CockroachDB strings. [#27328][#27328]
-- The `ALTER ... EXPERIMENTAL CONFIGURE ZONE` statement now accepts arbitrary scalar expressions (including possibly containing sub-queries) to compute the YAML operand. [#27213][#27213]
-- CockroachDB now recognizes PostgreSQL's abbreviated time units when converting strings to intervals. [#27393][#27393]
-- Sorting with a limit and/or input ordering now falls back to disk. [#27083][#27083]
-- CockroachDB now reports a hint in the error message if it encounters a correlated query that it does not yet support. [#27396][#27396]
-- The new `EXPERIMENTAL_RELOCATE LEASE` command for `ALTER TABLE` and `ALTER INDEX` allows manually transferring the leases for specific ranges to specific stores. [#26436][#26436]
-- Added the `sql.distsql.flow_stream_timeout` and `sql.distsql.max_running_flows` [cluster settings](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) to fine-tune flow setup. [#27404][#27404]
-- `IMPORT` now supports a `WITH oversample = ...` option to decrease variance in data distribution during processing. [#27341][#27341]
-- `IMPORT ... PGDUMP` now supports foreign keys. [#27425][#27425]
-- `IMPORT ... PGDUMP` now supports sequences. [#27739][#27739]
-- `IMPORT ... PGDUMP` now supports empty and public schemas. [#27782][#27782]
-- `SHOW JOBS` now reports results even when a job entry is incomplete or incorrect. [#27430][#27430]
-- The column labels in the output of `EXPLAIN` and all `SHOW` statements have been renamed for consistency. [#27098][#27098]
-- The column labels in the output of `SHOW COLUMNS` have been renamed for consistency with `information_schema`. The new `generation_expression` column reports the expression used for computed columns. [#27098][#27098]
-- The `SHOW CREATE` statement has been simplified and can be used equivalently on tables, views, and sequences without having to specify the type of object to inspect. [#27098][#27098]
-- Added the `chr()` built-in function (the inverse of `ascii()`). [#27278][#27278]
-- Added support for skipping foreign keys in `IMPORT`s that support them. [#27606][#27606]
-- The new `sql.optimizer.count` metric has been added to track the number of queries run with the experimental cost-based optimizer. [#26981][#26981]
-- More statement types are now reported in the collected statement statistics in the web UI and diagnostics reporting. [#27646][#27646]
-- Added support for KV traces (`SHOW KV TRACE FOR SESSION`) on DistSQL-executed queries. [#27802][#27802]
-- The return type of single-column generator functions has been changed from `tuple{columnType}` to `columnType`. This is a compatibility change to match the behavior of PostgreSQL. [#27773][#27773]
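-
-As an illustration of the new frame specification, a minimal sketch in the fully supported `ROWS` mode (the `employees` table and its columns are hypothetical):
-
-~~~ sql
--- 3-row moving average of salary per department, ordered by hire date.
-SELECT name,
-       avg(salary) OVER (
-         PARTITION BY dept
-         ORDER BY hired_at
-         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
-       ) AS rolling_avg_salary
-FROM employees;
-~~~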
-
-
Command-line changes
-
-- CockroachDB now computes the correct number of replicas on down nodes. Therefore, when [decommissioning nodes](https://www.cockroachlabs.com/docs/v2.1/remove-nodes) via the [`cockroach node decommission`](https://www.cockroachlabs.com/docs/v2.0/view-node-details) command, the `--wait=all` option no longer hangs indefinitely when there are down nodes. As a result, the `--wait=live` option is no longer necessary and has been deprecated. The `--wait=all` option is now the default. [#27027][#27027]
-- Added the `cockroach sqlfmt` command for formatting SQL statements (see the example after this list). [#27240][#27240]
-- The output labels of `cockroach user ls` and `cockroach user get` have been renamed for consistency with the SQL `SHOW USERS` statement. Also, to reduce inadvertent data leaks, the output of `cockroach user get` no longer includes hashed passwords. [#27098][#27098]
-- The new client-side option `prompt1` can be used to customize the `cockroach sql` interactive prompt. [#27803][#27803]
-- The new `auto_trace` client-side option can be used to turn on tracing for a `cockroach sql` session. [#27805][#27805]
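-
-A minimal sketch of the new formatting command, assuming it reads SQL from standard input (see `cockroach sqlfmt --help` for the authoritative options):
-
-~~~ shell
-# Pretty-print an ad-hoc statement.
-echo "select a,b from t where a>1" | cockroach sqlfmt
-~~~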
-
-
Web UI changes
-
-- The new **Hardware** dashboard displays time series data about CPU, memory, disk, and network IO. [#27626][#27626]
-- Time series metric metadata is now available at `/_admin/v1/metricmetadata`. [#25359][#25359]
-- Encryption progress is now reported on the `/#/reports/stores/local` debug page. [#26802][#26802]
-- Statement statistics can now be filtered by app on the **Statements** page. [#26949][#26949] {% comment %}doc{% endcomment %}
-- Improved the readability of the mean and standard deviation bar chart on the **Statement Details** page. [#26949][#26949] {% comment %}doc{% endcomment %}
-- Added a visualization of the standard deviation of the latency of statements to the **Statements** page. [#26949][#26949] {% comment %}doc{% endcomment %}
-- The **Statements** page now shows statements that executed on all nodes in the cluster, not just the gateway node. [#26605][#26605] {% comment %}doc{% endcomment %}
-- The **Statement Details** page now includes a table showing statistics broken down by which node was the gateway node. [#26605][#26605] {% comment %}doc{% endcomment %}
-
-
Bug fixes
-
-- Fixed the ordering of columns in the [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.1/view-node-details) output. [#27042][#27042]
-- Fixed a bug that would make the **Statement Details** page in the Web UI break if a statement wasn't found. [#27105][#27105]
-- Fixed some incorrectly typed columns in the `pg_index` virtual table. [#27723][#27723]
-- Fixed permissions and audit logging issues with the optimizer. [#27108][#27108]
-- Prevented a situation in which ranges repeatedly fail to perform a split. [#26934][#26934]
-- Fixed a crash that could occur when distributed `LIMIT` queries were run on a cluster with at least one unhealthy node. [#26950][#26950]
-- Failed [`IMPORT`s](https://www.cockroachlabs.com/docs/v2.1/import) now begin to clean up partially imported data immediately and in a faster manner. [#26959][#26959]
-- `IMPORT` now detects node failure and will restart instead of failing. [#26881][#26881]
-- Fixed a panic in the optimizer with `IN` filters. [#27053][#27053]
-- Fixed a panic that could occur when renaming a scalar function used as a data source. [#27039][#27039]
-- The server no longer automatically (and erroneously) finalizes a version upgrade when some nodes in the cluster are temporarily inactive. [#26821][#26821] {% comment %}doc{% endcomment %}
-- The `DISTINCT ON` clause is now reported properly in statement statistics. [#27221][#27221]
-- Fixed a panic in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.0/import) when creating a table using a sequence operation (e.g., `nextval()`) in a column's [DEFAULT](https://www.cockroachlabs.com/docs/v2.0/default-value) expression. [#27122][#27122]
-- `SET` now properly rejects attempts to use invalid variable names starting with `tracing.`. [#27216][#27216]
-- Fixed `NULL` equality handling in the experimental lookup join feature. [#27336][#27336]
-- `ALTER ... EXPERIMENTAL CONFIGURE ZONE` is now properly tracked in statement statistics. [#27213][#27213]
-- Invalid uses of set-generating functions in `FROM` clauses are now reported with the same error code as PostgreSQL. [#27390][#27390]
-- The number of `COPY` columns is now correctly verified during `IMPORT ... PGDUMP`. [#27345][#27345]
-- `CHANGEFEED`s now correctly emit all versions of quickly changing rows. [#27612][#27612]
-- Alleviated a scenario in which a large number of uncommitted Raft commands could cause memory pressure at startup time. [#27009][#27009]
-- Prevented the unbounded growth of the Raft log caused by a loss of quorum. [#27774][#27774]
-- Foreign key references in `IMPORT ... PGDUMP` are now processed in the correct order. [#27782][#27782]
-
-
Performance improvements
-
-- Transactional writes are now pipelined when being replicated and when being written to disk, dramatically reducing the latency of transactions that perform multiple writes. This can be disabled using the new `kv.transaction.write_pipelining_enabled` [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) (see the example after this list). [#26599][#26599] {% comment %}doc{% endcomment %}
-- Reduced CPU utilization in clusters with many ranges, also during periods of lease rebalancing. [#26910][#26910] [#26907][#26907]
-- Reduced the memory size of commonly used Request and Response objects. [#27112][#27112]
-- Improved low-level iteration performance. [#27299][#27299]
-- Prevented a scenario in which dropping a table could cause excessive compaction activity that would significantly degrade performance. [#27353][#27353]
-- Prevented the scanner from running incessantly on stores with hundreds of thousands of replicas. [#27441][#27441]
-- Prevented dead nodes in clusters with many ranges from causing unnecessarily high CPU usage. [#26911][#26911]
-- Significantly reduced CPU usage when a large number of ranges are deleted from a node. [#27520][#27520]
-- The `min`, `max`, `sum`, and `avg` aggregate functions now take linear time when used as window functions, for all supported window frame options. [#26988][#26988]
-- `CHANGEFEED`s no longer hold all data for each poll in memory at once, increasing scalability. [#27612][#27612] {% comment %}doc{% endcomment %}
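-
-For operators who need to rule pipelining in or out while troubleshooting, a sketch of toggling the setting named above:
-
-~~~ sql
--- Disable transactional write pipelining cluster-wide...
-SET CLUSTER SETTING kv.transaction.write_pipelining_enabled = false;
--- ...and restore the default.
-SET CLUSTER SETTING kv.transaction.write_pipelining_enabled = true;
-~~~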
-
-
Build changes
-
-- Upgraded protobuf to 3.6.0. [#26935][#26935]
-
-
Doc updates
-
-- Added a tutorial on [benchmarking CockroachDB with TPC-C](https://www.cockroachlabs.com/docs/v2.1/performance-benchmarking-with-tpc-c). [#3281][#3281]
-- Expanded the [Production Checklist](https://www.cockroachlabs.com/docs/v2.1/recommended-production-settings#networking) to cover a detailed explanation of network flags and scenarios and updated [production deployment tutorials](https://www.cockroachlabs.com/docs/v2.1/manual-deployment) to encourage the use of `--advertise-host` on node start. [#3352][#3352]
-- Expanded the [Kubernetes tutorials](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes) to include setting up monitoring and alerting with Prometheus and Alertmanager. [#3370][#3370]
-- Updated the [rolling upgrade tutorial](https://www.cockroachlabs.com/docs/v2.1/upgrade-cockroach-version) with explicit `systemd` commands. [#3396][#3396]
-- Updated the [OpenSSL certificate tutorial](https://www.cockroachlabs.com/docs/v2.1/create-security-certificates-openssl) to allow multiple node certificates with the same subject. [#3423][#3423]
-- Added an example on [editing SQL statements in an external editor from within the built-in SQL shell](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client#edit-sql-statements-in-an-external-editor). [#3425][#3425]
-
-
-
-
Contributors
-
-This release includes 328 merged PRs by 42 authors. We would like to thank the following contributors from the CockroachDB community, with special thanks to first-time contributors Art Nikpal, Ivan Kozik, Tarek Badr, and nexdrew.
-
-- Art Nikpal
-- Brett Snyder
-- Ivan Kozik
-- Nishant Gupta
-- Tarek Badr
-- neeral
-- nexdrew
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-We have now transitioned into the CockroachDB 2.1 Beta phase and will be releasing weekly until the GA release. This week's release includes PostgreSQL compatibility enhancements, general usability improvements, performance improvements, and bug fixes. In addition, we want to highlight a few major benefits:
-
-- **Automatic performance optimizations** - Range leases are now automatically rebalanced throughout the cluster to even out the amount of QPS being handled by each server.
-- **Better controls for geo-distributed clusters** - We’ve added more sophisticated support for controlling the network interfaces to use in certain situations, so nodes can prefer local, private IPs for intra-DC communication and only use public IPs when making hops that must go over the open internet. See the `--locality-advertise-addr` flag of the [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) command for more details.
-
-
Backward-incompatible changes
-
-- Support for PostgreSQL's `TIMETZ` data type has been removed due to an incomplete and incorrect implementation. This feature was available only in previous 2.1 alpha releases. Before upgrading to this release, tables with the `TIMETZ` type must be dropped entirely; it is not possible to convert the data or drop a single `TIMETZ` column. [#28095][#28095] {% comment %}doc{% endcomment %}
-- Support for the `BIT` data type has been removed due to an incorrect implementation and incompatibility with some client apps. Tables with the `BIT` type will continue to work but will see their type automatically changed to `INT` in the output of `SHOW TABLES`, `information_schema`, etc. This is backward-compatible insofar as the previous `BIT` type in CockroachDB was actually a simple integer. A PostgreSQL-compatible replacement will likely be added at a later time. [#28814][#28814] {% comment %}doc{% endcomment %}
-
-
General changes
-
-- CockroachDB now supports a separate CA (`ca-ui.crt`) and certificate (`ui.crt`) for the Web UI. [#27916][#27916] {% comment %}doc{% endcomment %}
-- The ability to set lease placement preferences in [replication zones](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones) is now fully supported. Existing lease placement preferences will continue to function as in v2.0. [#28261][#28261] {% comment %}doc{% endcomment %}
-- The new `/_admin/v1/enqueue_range` admin server endpoint runs a specified range through a specified internal queue on one or all nodes. The `skip_should_queue` parameter can also be specified to run the range through the queue without first checking whether it needs to be run. This endpoint is intended primarily for debugging purposes (see the example after this list). [#26554][#26554] {% comment %}doc{% endcomment %}
-- If enabled, anonymous [diagnostics reporting](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting) now includes hardware and OS information as well as basic stats about the size of `IMPORT` jobs. [#28676][#28676] [#28726][#28726] {% comment %}doc{% endcomment %}
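-
-A hedged sketch of exercising the debug endpoint named above; the request fields shown here (`queue`, `range_id`, `skip_should_queue`) are assumptions for illustration, not the authoritative API shape:
-
-~~~ shell
-# Hypothetical request: run range 42 through the replicate queue,
-# skipping the should-queue check.
-curl -k -X POST https://localhost:8080/_admin/v1/enqueue_range \
-  -H "Content-Type: application/json" \
-  -d '{"queue": "replicate", "range_id": 42, "skip_should_queue": true}'
-~~~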
-
-
Enterprise edition changes
-
-- This release includes several changes to the [Change Data Capture](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) feature:
- - `CHANGEFEED`s now support interleaved tables. [#27991][#27991] {% comment %}doc{% endcomment %}
- - `CREATE CHANGEFEED` now requires an enterprise license when used with Kafka. [#27962][#27962] {% comment %}doc{% endcomment %}
- - `CHANGEFEED`s now produce an error when column families are added (instead of returning incorrect results) and when targeting `system` tables (instead of operating with undefined behavior). [#27962][#27962] {% comment %}doc{% endcomment %}
- - `CREATE CHANGEFEED` is now restricted to superusers. [#27962][#27962] {% comment %}doc{% endcomment %}
- - `CHANGEFEED` job descriptions now substitute values for SQL placeholders. [#28220][#28220] {% comment %}doc{% endcomment %}
- - `CHANGEFEED`s can now only target lists of physical tables. [#27996][#27996] {% comment %}doc{% endcomment %}
- - `CHANGEFEED`s now produce an error when a watched table is truncated, dropped, or renamed. [#28204][#28204] {% comment %}doc{% endcomment %}
- - `CHANGEFEED` Kafka tunings have been adjusted for faster flushes, improving throughput. [#28586][#28586]
- - `CHANGEFEED`s now checkpoint progress more granularly. [#28319][#28319]
- - `CHANGEFEED`s now export metrics for production monitoring. [#28162][#28162] {% comment %}doc{% endcomment %}
- - The `CHANGEFEED` `timestamp` option has been split into `updated` and `resolved` (see the example after this list). [#28733][#28733] {% comment %}doc{% endcomment %}
- - `CHANGEFEED`s are now executed using our distributed SQL framework. [#28555][#28555] {% comment %}doc{% endcomment %}
-- This release includes the following changes to the [Encryption At Rest](https://www.cockroachlabs.com/docs/v2.1/encryption) feature:
- - The status of encryption is now written to [debug logs](https://www.cockroachlabs.com/docs/v2.1/debug-and-error-logs). [#27880][#27880] {% comment %}doc{% endcomment %}
- - Data keys are now rotated while nodes are running. [#28148][#28148] {% comment %}doc{% endcomment %}
- - The new `cockroach debug encryption-status` command displays encryption key information. [#28582][#28582] {% comment %}doc{% endcomment %}
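-
-To illustrate the renamed changefeed options, a sketch using the new `updated` and `resolved` options (the table name and Kafka address are placeholders):
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE orders
-  INTO 'kafka://broker.example.com:9092'
-  WITH updated, resolved;
-~~~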
-
-
SQL language changes
-
-- Added foreign key support to [`IMPORT ... MYSQLDUMP`](https://www.cockroachlabs.com/docs/v2.1/import). [#27861][#27861] {% comment %}doc{% endcomment %}
-- The output of [`SHOW GRANTS`](https://www.cockroachlabs.com/docs/v2.1/show-grants) is now fully sorted. [#27884][#27884] {% comment %}doc{% endcomment %}
-- Reads from Google Cloud Storage for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) or [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup) jobs are now more resilient to quota limits. [#27862][#27862]
-- The [`ORDER BY INDEX`](https://www.cockroachlabs.com/docs/v2.1/query-order#sorting-in-index-order) notation now implies an ordering by the implicit primary key columns appended to an index. [#27812][#27812] {% comment %}doc{% endcomment %}
-- Added the `server_encoding` [session variable](https://www.cockroachlabs.com/docs/v2.1/show-vars) and protocol status parameter, for compatibility with PostgreSQL. It is set to `UTF8` and cannot be changed. [#27943][#27943] {% comment %}doc{% endcomment %}
-- Extended support of the `extra_float_digits` [session variable](https://www.cockroachlabs.com/docs/v2.1/show-vars), for compatibility with PostgreSQL. [#27952][#27952] {% comment %}doc{% endcomment %}
-- Improved the handling of [`SET`](https://www.cockroachlabs.com/docs/v2.1/set-vars), [`RESET`](https://www.cockroachlabs.com/docs/v2.1/reset-vars) and [`SHOW`](https://www.cockroachlabs.com/docs/v2.1/show-vars), for better compatibility with PostgreSQL. [#27947][#27947] {% comment %}doc{% endcomment %}
-- Exposed the `integer_datetimes` session variable in `SHOW` and `pg_settings`, for compatibility with PostgreSQL. [#27947][#27947] {% comment %}doc{% endcomment %}
-- The default values of the `client_min_messages` and `extra_float_digits` [session variables](https://www.cockroachlabs.com/docs/v2.1/show-vars) now match PostgreSQL. [#27947][#27947] {% comment %}doc{% endcomment %}
-- Corrected the `oids` and formatting of some columns in the `pg_catalog.pg_index` table. [#27961][#27961]
-- The distribution of queries that use the `repeat()` [built-in function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) is now permitted. [#28039][#28039]
-- Statement statistics are now grouped separately for queries using the [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) and heuristic planner. [#27806][#27806]
-- CockroachDB now supports empty tuples with the syntax `()`, 1-valued tuples with the syntax `(x,)` in addition to `row(x)`, and the ability to use `IN` with an empty tuple as right operand. This is a CockroachDB extension. [#28143][#28143] {% comment %}doc{% endcomment %}
-- CockroachDB now supports constructing array values using parentheses, for example `ARRAY(1,2,3,4)`. This is a CockroachDB extension; the standard PostgreSQL syntax `ARRAY[1,2,3,4]` remains supported. [#28238][#28238] {% comment %}doc{% endcomment %}
-- CockroachDB now supports converting arrays and tuples to strings, for compatibility with PostgreSQL. [#28183][#28183] {% comment %}doc{% endcomment %}
-- `ANY`/`ALL`/`SOME` comparisons are now more permissive about the types of their input expressions, and comparisons with empty tuples are now allowed. [#28226][#28226]
-- Improved the handling of decimal 0s. Specifically, -0 is coerced to 0 and values like 0.00 retain the digits after the decimal point. [#27978][#27978]
-- Arrays of arrays are no longer allowed, even as intermediate results. [#28116][#28116]
-- [`IMPORT ... PGDUMP`](https://www.cockroachlabs.com/docs/v2.1/import) now supports CockroachDB dump files. [#28359][#28359] {% comment %}doc{% endcomment %}
-- The decimal variants of the `ceil()` and `ceiling()` functions now return 0 where they would have returned -0 previously. [#28366][#28366]
-- Improved support for S3-compatible endpoints in [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup), [`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore), and [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import). The `AWS_REGION` parameter is no longer required. Services like Digital Ocean Spaces and Minio now work correctly. [#28394][#28394] {% comment %}doc{% endcomment %}
-- CockroachDB now supports an optional `FILTER` clause with aggregates when used as [window functions](https://www.cockroachlabs.com/docs/v2.1/window-functions). [#28357][#28357]
-- Normalized the case of table names imported via [`IMPORT ... MYSQLDUMP`](https://www.cockroachlabs.com/docs/v2.1/import). [#28397][#28397] {% comment %}doc{% endcomment %}
-- All queries now run through the DistSQL execution engine. [#27863][#27863]
-- It is now an error to specify both `FORCE_INDEX` and `NO_INDEX_JOIN` hints at the same time. [#28411][#28411]
-- Added `numeric_precision_radix` to the `information_schema.columns` table. [#28467][#28467] {% comment %}doc{% endcomment %}
-- Added the `schemachanger.lease.duration` and `schemachanger.lease.renew_fraction` [cluster settings](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) to control the schema change lease. [#28342][#28342] {% comment %}doc{% endcomment %}
-- Added the `string_agg()` aggregation function, which concatenates a collection of strings into a single string and separates them with a specified delimiter. [#28392][#28392] {% comment %}doc{% endcomment %}
-- CockroachDB now fully supports the `RANGE` mode for specification of [window function frames](https://www.cockroachlabs.com/docs/v2.1/window-functions). [#27022][#27022] {% comment %}doc{% endcomment %}
-- CockroachDB now supports the `GROUPS` mode for specification of [window function frames](https://www.cockroachlabs.com/docs/v2.1/window-functions). [#28244][#28244] {% comment %}doc{% endcomment %}
-- CockroachDB now supports the `ARRAY()` operator and comparisons with sub-queries on the right side of the comparison, when they appear themselves in sub-queries. [#28618][#28618]
-- CockroachDB now supports two experimental compatibility modes with how PostgreSQL handles [`SERIAL`](https://www.cockroachlabs.com/docs/v2.1/serial) and [sequences](https://www.cockroachlabs.com/docs/v2.1/create-sequence), to ease reuse of third-party frameworks or apps developed for PostgreSQL. These modes can be enabled with the `experimental_serial_normalization` session variable (per client) and `sql.defaults.serial_normalization` cluster setting (cluster-wide). The first mode, `virtual_sequence`, enables compatibility with many applications using `SERIAL` with maximum performance and scalability. The second mode, `sql_sequence`, enables maximum PostgreSQL compatibility but uses regular SQL sequences and is thus subject to performance constraints (see the sketch after this list). [#28575][#28575] {% comment %}doc{% endcomment %}
-- The output of [`SHOW COLUMNS`](https://www.cockroachlabs.com/docs/v2.1/show-columns) now indicates which columns are hidden. [#28750][#28750] {% comment %}doc{% endcomment %}
-- [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v2.1/show-create) now reports the `FLOAT` column types as `FLOAT4` and `FLOAT8` (the default) instead of `REAL` and `FLOAT`. [#28776][#28776]
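-
-A sketch of the two experimental `SERIAL` compatibility modes described above:
-
-~~~ sql
--- Per client session:
-SET experimental_serial_normalization = 'virtual_sequence';
--- Or cluster-wide:
-SET CLUSTER SETTING sql.defaults.serial_normalization = 'sql_sequence';
--- SERIAL columns created afterwards use the selected strategy.
-CREATE TABLE t (id SERIAL PRIMARY KEY, payload STRING);
-~~~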
-
-
Command-line changes
-
-- This release includes the following changes to the [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) command:
- - The new `--listen-addr` flag recognizes both a hostname/address and port and replaces the `--host` and `--port` flags, which are now deprecated for `cockroach start` but remain valid for other *client* commands. The port portion of `--listen-addr` can be either a service name or numeric value; when specified as `0`, a port number is automatically allocated (see the combined example after this list). [#27800][#27800] [#28373][#28373] [#28502][#28502] {% comment %}doc{% endcomment %}
- - The new `--advertise-addr` flag recognizes both a hostname/address and port and replaces the `--advertise-host` and `--advertise-port` flags, which are now deprecated. The port portion of `--advertise-addr` can be either a service name or numeric value; when specified as `0`, a port number is automatically allocated. [#27800][#27800] [#28373][#28373] [#28502][#28502] {% comment %}doc{% endcomment %}
- - The new `--http-addr` flag recognizes both a hostname/address and port and replaces the `--http-host` flag, which is now deprecated. The port portion of `--http-addr` can be either a service name or numeric value; when specified as `0`, a port number is automatically allocated. [#28373][#28373] [#28502][#28502] {% comment %}doc{% endcomment %}
- - The new `--locality-advertise-addr` flag can be used to advertise a hostname/address and port to other CockroachDB nodes for specific [localities](https://www.cockroachlabs.com/docs/v2.1/start-a-node#locality). This is useful in deployments with "local" or "private" interfaces that are only accessible by a subset of the nodes and "global" or "public" interfaces that are slower or more expensive but accessible by any node. In such cases, `--locality-advertise-addr` can be used to route traffic over the local interface whenever possible. [#28531][#28531]
- - The command now reports the URL of the web UI with the prefix "`webui:`", not `admin:`. [#28038][#28038] {% comment %}doc{% endcomment %}
- - The command now reports a warning if more than 75% of available RAM is reserved by `--cache` and `--max-sql-memory`. [#28199][#28199] {% comment %}doc{% endcomment %}
- - The command now suggests which command-line flags to use with client commands (e.g., `cockroach quit`) to access the newly started node. [#28198][#28198] {% comment %}doc{% endcomment %}
-- This release includes the following changes to `cockroach` client commands:
- - Client commands now better attempt to inform the user about why a connection is failing. [#28200][#28200] {% comment %}doc{% endcomment %}
- - Client commands that print out SQL results now issue a warning if more than 10,000 result rows are buffered in the `table` formatter. [#28490][#28490]
- - Client commands that use a SQL connection (e.g., `cockroach sql`, `cockroach node`, `cockroach user`) now produce an error if a connection could not be established within 5 seconds instead of waiting forever. [#28326][#28326] {% comment %}doc{% endcomment %}
- - The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) command and other client commands that display SQL results now use the new `table` result formatter by default, replacing the previous formatter called `pretty`. This provides more compact and more reusable results. [#28465][#28465] {% comment %}doc{% endcomment %}
- - The [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) command and other client commands that display SQL results containing byte arrays now print them as if they were converted by a SQL cast to the `STRING` type. [#28494][#28494] {% comment %}doc{% endcomment %}
- - The `--host` flag and `COCKROACH_HOST` environment variable for client commands now recognize both a hostname/address and port number. The `--port` flag is still recognized but no longer documented; `--host` is now preferred. The `COCKROACH_PORT` environment variable is now deprecated in favor of `COCKROACH_HOST`. Also, the syntax to specify IPv6 addresses has been changed to use square brackets, for example, `--host=[::1]` instead of just `--host=::1`; the previous syntax is still recognized for backward compatibility but is deprecated. [#28373][#28373] {% comment %}doc{% endcomment %}
-- The new `timeseries.storage.10s_resolution_ttl` and `timeseries.storage.30m_resolution_ttl` [cluster settings](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) control how long time series data is retained on the cluster. They work with the recently added "roll-ups" to allow longer retention of time series data while consuming considerably less disk space. [#28169][#28169] {% comment %}doc{% endcomment %}
-- The [`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo) command now supports starting with one of various datasets loaded. [#28383][#28383] {% comment %}doc{% endcomment %}
-- The file generated by running `cockroach debug zip` now contains the contents of the `system.rangelog` table, which is a record of range splits and rebalances in the cluster. The problem ranges report is now included as well. [#28396][#28396] [#28253][#28253] {% comment %}doc{% endcomment %}
-- The `cockroach node status` command now works on unavailable/broken clusters. [#28249][#28249] {% comment %}doc{% endcomment %}
-- CockroachDB now reports a non-zero exit status if an attempt is made to use a non-existent command. [#28492][#28492]
-- CockroachDB now attempts to inform the operator if the names and IP addresses listed in the configured certificates do not match the server configuration. [#28502][#28502] {% comment %}doc{% endcomment %}
-- Added a locality filter for the `cockroach gen haproxy` command. [#28649][#28649] {% comment %}doc{% endcomment %}
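-
-A combined sketch of the new address flags (hostnames, the certificates directory, and the join list are placeholders):
-
-~~~ shell
-cockroach start \
-  --certs-dir=certs \
-  --listen-addr=0.0.0.0:26257 \
-  --advertise-addr=node1.internal.example.com:26257 \
-  --http-addr=0.0.0.0:8080 \
-  --join=node1.internal.example.com:26257,node2.internal.example.com:26257
-~~~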
-
-
Web UI changes
-
-- Added disk read and write time charts to the [**Hardware** dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-hardware-dashboard). [#27977][#27977] [#28594][#28594] {% comment %}doc{% endcomment %}
-- The [**Hardware** dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-hardware-dashboard) now shows system and user CPU summed instead of separately, and normalized by number of CPUs. [#28596][#28596] {% comment %}doc{% endcomment %}
-- Added a link to the [**Statements** page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-statements-page) from the sidebar. [#27928][#27928] {% comment %}doc{% endcomment %}
-- The [**Statements** page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-statements-page) now reveals whether a SQL query used the new cost-based optimizer. [#28094][#28094] {% comment %}doc{% endcomment %}
-- Added the number of CPUs and percentages of memory and disk usage to the [**Node List**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-overview-dashboard). [#28189][#28189] {% comment %}doc{% endcomment %}
-- Removed "distsql reads" time series from the [**SQL** dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-sql-dashboard), since execution engines are being merged. [#28350][#28350] {% comment %}doc{% endcomment %}
-- The **Problem Ranges** report now shows the number of replicas that have an excessively large log. [#28034][#28034] {% comment %}doc{% endcomment %}
-- The **Stores** report now shows encryption statistics. [#26890][#26890] {% comment %}doc{% endcomment %}
-- Login is now required by default on secure clusters. [#28416][#28416] {% comment %}doc{% endcomment %}
-- Enlarged the clickable area on dropdown components to include the entirety of the surrounding container. [#28331][#28331]
-- The [**Jobs** page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-jobs-page) now supports indefinitely running job types that have a "high-water timestamp", instead of the "fraction completed" used by jobs with a finite task. [#28535][#28535] {% comment %}doc{% endcomment %}
-- Improved the alert text that is displayed when the Web UI connection is lost. [#28838][#28838]
-
-
Bug fixes
-
-- Fixed a bug where the [**Statements** page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-statements-page) in the Web UI blanked out after reloading itself. [#28108][#28108]
-- CockroachDB no longer erroneously allows generator functions, aggregates, and window functions in the `ON` clause of joins. [#28839][#28839]
-- Fixed an `index-id does not exist` error that could happen on [`ADD COLUMN`](https://www.cockroachlabs.com/docs/v2.1/add-column) or [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.1/drop-column). [#28803][#28803]
-- Fixed row counts in the output of [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import). [#28469][#28469] {% comment %}doc{% endcomment %}
-- Fixed various problems related to the rollback of [schema changes](https://www.cockroachlabs.com/docs/v2.1/online-schema-changes). [#28014][#28014] [#28050][#28050]
-- Prevented a node from freezing after `DROP DATABASE` when the command aborts, and fixed the rare use of an older descriptor after [`DROP INDEX`](https://www.cockroachlabs.com/docs/v2.1/drop-index). [#28381][#28381]
-- Fixed the handling of regular aggregations combined with window functions and columns "as-is". [#27897][#27897]
-- Fixed a panic caused by key-value tracing a plan that uses an index joiner. [#27942][#27942]
-- The `bytea_output` [session variable](https://www.cockroachlabs.com/docs/v2.1/set-vars) is now properly effective for distributed queries. [#27951][#27951]
-- Limited the size of "batch groups" when committing a batch to RocksDB to avoid rare scenarios in which multi-gigabyte batch groups are created, which can cause a server to run out of memory when replaying the RocksDB log at startup. [#27895][#27895]
-- Fixed the round-tripping of cast expression formatting in the presence of [collated strings](https://www.cockroachlabs.com/docs/v2.1/collate). [#27941][#27941]
-- Prevented spurious query errors when planning some complex correlated SRFs through the distributed execution engine. [#27995][#27995]
-- Fixed the handling of frame boundary offsets in `WINDOW` clauses. [#27933][#27933]
-- Fixed the formatting of time datatypes in some circumstances. [#28040][#28040]
-- Fixed the behavior of `crdb_internal.cluster_id()` in distributed queries. [#28042][#28042]
-- Fixed incorrect `NULL` handling in the distributed implementations of `INTERSECT` and `EXCEPT`. [#28097][#28097]
-- Corrected erroneous failures of privileged built-ins in queries run through the distributed execution engine. [#28107][#28107]
-- Ensured that the [`TIMESTAMP`](https://www.cockroachlabs.com/docs/v2.1/timestamp) data type never retains a timezone and renders consistently across a distributed computation flow. [#28112][#28112]
-- Corrected casts and binary operators between `TIMESTAMPTZ` and `TIMESTAMP` in some cases. [#28128][#28128]
-- Prevented some [sequence built-ins](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators#sequence-functions) from incorrectly running in distributed flows. [#28114][#28114]
-- Corrected the round-trip formatting of negative floats and decimals in the context of other expressions when executing in a distributed flow. [#28129][#28129]
-- Fixed a bug that could skip the row following a deleted row during [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup). [#28172][#28172]
-- The [`cockroach user set --password`](https://www.cockroachlabs.com/docs/v2.1/create-and-manage-users) command can now change the password of existing users. [#28197][#28197] {% comment %}doc{% endcomment %}
-- CockroachDB now supports a wider range of tuple and array values in query results. [#28151][#28151]
-- This release includes the following fixes to the [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo) command:
- - The commands are now properly able to customize the prompt with `~/.editrc` on Linux. [#28233][#28233] {% comment %}doc{% endcomment %}
- - The commands once again support copy-pasting special Unicode characters from other documents. [#28233][#28233] {% comment %}doc{% endcomment %}
- - The commands once again properly handle copy-pasting a mixture of client-side commands (e.g., `set`) and SQL statements. [#28235][#28235]
- - The commands now properly print a warning when a `?` character is mistakenly used to receive contextual help in a non-interactive session, instead of crashing. [#28324][#28324]
- - The commands now work properly even when the `TERM` environment variable is not set. [#28613][#28613]
-- Generator built-ins now correctly return no rows instead of `NULL` when given `NULL` arguments. [#28252][#28252]
-- Fixed out-of-memory errors caused by very large raft logs. [#28293][#28293] [#28511][#28511]
-- Certain queries that use empty arrays constructed from subqueries no longer spuriously fail when executed via the distributed execution engine. [#28391][#28391]
-- `SHOW JOBS` now uses placeholder values for `BACKUP` and `RESTORE` job descriptions. [#28321][#28321]
-- CockroachDB now handles negative `FLOAT` zeros properly in more cases. [#28569][#28569]
-- CockroachDB now correctly handles computation of `array_agg()` when used as a [window function](https://www.cockroachlabs.com/docs/v2.1/window-functions). [#28291][#28291]
-- [Decommissioning multiple nodes](https://www.cockroachlabs.com/docs/v2.1/remove-nodes) is now possible without posing a risk to cluster health. Recommissioning a node no longer requires a restart of the target node to take effect. [#28707][#28707] {% comment %}doc{% endcomment %}
-- Fixed a rare scenario where the value written for one system key was seen when another system key was read, leading to the violation of internal invariants. [#28794][#28794]
-- Hidden columns are now listed in `information_schema` and `pg_catalog` tables, for better compatibility with PostgreSQL. [#28750][#28750] {% comment %}doc{% endcomment %}
-- Casting arrays now correctly preserves `NULL` values. [#28860][#28860]
-- `IMPORT` no longer silently converts `\r\n` characters in CSV files into `\n`. [#28181][#28181]
-- Fixed the poor initial latencies introduced in a recent release. [#28599][#28599]
-
-
Performance improvements
-
-- CockroachDB now periodically refreshes table leases to avoid initial latency on tables that have not been accessed recently. [#28725][#28725] {% comment %}doc{% endcomment %}
-- Reduced the fixed cost of running distributed SQL queries. [#27899][#27899]
-- Prevented large buffer allocations for DML statements with `RETURNING` clauses. [#27944][#27944]
-- Improved low-level iteration performance in the presence of range tombstones. [#27904][#27904]
-- Data ingested with `RESTORE` and `IMPORT` is now eligible for a performance optimization used in incremental `BACKUP` and `CHANGEFEED`s. [#27966][#27966]
-- Reduced lock contention in `RemoteClockMonitor`. [#28000][#28000]
-- Reduced lock contention in the Replica write path. [#24990][#24990]
-- Reduced lock contention in the Gossip server. [#28001][#28001] [#28127][#28127]
-- Reduced lock contention and avoided allocations in `raftEntryCache`. [#27997][#27997]
-- Fixed a batch commit performance regression that reduced write performance by 20%. [#28163][#28163]
-- Greatly improved the performance of catching up followers that are behind when Raft logs are large. [#28511][#28511]
-- Slightly improved the performance of the `nextval()` [sequence function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators#sequence-functions). [#28576][#28576]
-- Reduced the cost of Raft log truncations and increased single-range throughput. [#28126][#28126]
-- Subqueries are now run through the distributed execution engine. [#28580][#28580]
-- Range leases are now automatically rebalanced throughout the cluster to even out the amount of QPS being handled by each node. [#28340][#28340] {% comment %}doc{% endcomment %}
-- Greatly improved the performance of deleting from interleaved tables that have `ON DELETE CASCADE` clauses. [#28330][#28330]
-
-
Doc updates
-
-- Added a tutorial on [orchestrating CockroachDB across multiple Kubernetes clusters in different regions](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes-multi-cluster). [#3558](https://github.com/cockroachdb/docs/pull/3558)
-- Expanded the [Build an App](https://www.cockroachlabs.com/docs/v2.1/build-an-app-with-cockroachdb) tutorials for most languages to offer instructions and code samples for secure clusters. [#3557](https://github.com/cockroachdb/docs/pull/3557)
-- Significantly expanded the documentation on [Window Functions](https://www.cockroachlabs.com/docs/v2.1/window-functions). [#3426](https://github.com/cockroachdb/docs/pull/3426)
-- Added a conceptual explanation of [Online Schema Changes](https://www.cockroachlabs.com/docs/v2.1/online-schema-changes), with examples and current limitations. [#3492](https://github.com/cockroachdb/docs/pull/3492)
-- Streamlined instructions for essential [enterprise and core backup and restore tasks](https://www.cockroachlabs.com/docs/v2.1/backup-and-restore), including a bash script for automated backups. [#3489](https://github.com/cockroachdb/docs/pull/3489)
-- Expanded the [TPC-C Performance Benchmarking](https://www.cockroachlabs.com/docs/v2.1/performance-benchmarking-with-tpc-c) tutorial to cover benchmarking large clusters. [#3281][#3281]
-- Documented the `skip` option for [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) as well as support for decompressing input files. [#3510](https://github.com/cockroachdb/docs/pull/3510)
-- Documented the `ANALYZE`, `OPT`, and `DISTSQL` options for [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.1/explain). [#3427](https://github.com/cockroachdb/docs/pull/3427)
-- Documented how to [add a computed column to an existing table](https://www.cockroachlabs.com/docs/v2.1/computed-columns#add-a-computed-column-to-an-existing-table) and [convert a computed column into a regular column](https://www.cockroachlabs.com/docs/v2.1/computed-columns#convert-a-computed-column-into-a-regular-column). [#3501](https://github.com/cockroachdb/docs/pull/3501) [#3538](https://github.com/cockroachdb/docs/pull/3538)
-- Documented the abbreviated PostgreSQL [`INTERVAL`](https://www.cockroachlabs.com/docs/v2.1/interval) format. [#3503](https://github.com/cockroachdb/docs/pull/3503)
-- Documented the `auto_trace` [session variable](https://www.cockroachlabs.com/docs/v2.1/set-vars), which replaces the `SHOW TRACE` statement. [#3508](https://github.com/cockroachdb/docs/pull/3508)
-- Various updates to the [Information Schema](https://www.cockroachlabs.com/docs/v2.1/information-schema) documentation. [#3531](https://github.com/cockroachdb/docs/pull/3531)
-- Documented the new [default databases](https://www.cockroachlabs.com/docs/v2.1/show-databases#default-databases). [#3506](https://github.com/cockroachdb/docs/pull/3506)
-- Cleaned up the output of all `SHOW` statements; combined the `SHOW CREATE TABLE`, `SHOW CREATE VIEW`, and `SHOW CREATE SEQUENCE` pages into a single [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v2.1/show-create) page; and removed the experimental status from [`SHOW CONSTRAINTS`](https://www.cockroachlabs.com/docs/v2.1/show-constraints). [#3523](https://github.com/cockroachdb/docs/pull/3523)
-- Documented the [`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo) command. [#3509](https://github.com/cockroachdb/docs/pull/3509)
-- Various updates to the [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) documentation. [#3499](https://github.com/cockroachdb/docs/pull/3499)
-
-
-
-
Contributors
-
-This release includes 493 merged PRs by 39 authors.
-We would like to thank the following contributors from the CockroachDB community:
-
-- Constantine Peresypkin
-- Garvit Juniwal
-- Joseph Lowinske (first-time contributor, CockroachDB team member)
-- Song Hao
-- Takuya Kuwahara
-- Tim O'Brien (first-time contributor, CockroachDB team member)
-- neeral
-
-
-
-- CockroachDB now hides more information from the statement statistics in [diagnostics reporting](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting). [#28906][#28906] {% comment %}doc{% endcomment %}
-- CockroachDB now preserves the distinction between different column types for string values like in PostgreSQL, for compatibility with 3rd party tools and ORMs. [#29006][#29006] {% comment %}doc{% endcomment %}
-- The [`SET CLUSTER SETTING`](https://www.cockroachlabs.com/docs/v2.1/set-cluster-setting) statement can no longer be used inside a [transaction](https://www.cockroachlabs.com/docs/v2.1/transactions). It also now attempts to wait until the change has been gossiped before allowing subsequent statements. [#29082][#29082] {% comment %}doc{% endcomment %}
-- The [`ALTER TABLE ... SPLIT AT`](https://www.cockroachlabs.com/docs/v2.1/split-at) statement now produces an error if executed while the merge queue is enabled, as the merge queue is likely to immediately discard any splits created by the statement (see the sketch after this list). [#29082][#29082] {% comment %}doc{% endcomment %}
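-
-A sketch of the interaction described above; the `kv.range_merge.queue_enabled` setting name is an assumption used for illustration:
-
-~~~ sql
--- With the merge queue enabled, this now produces an error instead of
--- creating splits that would be immediately merged away:
-ALTER TABLE t SPLIT AT VALUES (100), (200);
--- Disabling the merge queue first (setting name assumed) allows the split:
-SET CLUSTER SETTING kv.range_merge.queue_enabled = false;
-ALTER TABLE t SPLIT AT VALUES (100), (200);
-~~~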
-
-
Command-line changes
-
-- Improved the error message printed when [`cockroach quit`](https://www.cockroachlabs.com/docs/v2.1/stop-a-node) is run on a node that has not yet been initialized. [#29152][#29152]
-- The [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) command now emits the PID of the server process to the file specified by the `--pid-file` flag as soon as it is ready to accept network connections but possibly before it is done bootstrapping (i.e., before [`cockroach init`](https://www.cockroachlabs.com/docs/v2.1/initialize-a-cluster) completes). To wait for SQL readiness, use the `--listen-url-file` flag instead. [#29160][#29160] {% comment %}doc{% endcomment %}
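-
-A sketch of scripting against the two files (paths are placeholders):
-
-~~~ shell
-cockroach start --insecure --background \
-  --pid-file=/var/run/cockroach/pid \
-  --listen-url-file=/var/run/cockroach/url
-# The PID file appears once the process accepts network connections;
-# the URL file appears only once SQL clients can connect.
-~~~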
-
-
Bug fixes
-
-- CockroachDB now populates the `data_type` column of `information_schema.columns` like PostgreSQL, for compatibility with 3rd party tools and ORMs. [#29006][#29006] {% comment %}doc{% endcomment %}
-- The [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.1/sql-dump) command can once again operate across multiple CockroachDB versions. [#29006][#29006]
-- CockroachDB now distinguishes `CHAR` and `VARCHAR`, as mandated by the SQL standard and PostgreSQL compatibility. When a width is not specified (e.g., `CHAR(3)`), the maximum width of `VARCHAR` remains unconstrained whereas the maximum width of `CHAR` is 1 character. [#29006][#29006] {% comment %}doc{% endcomment %}
-- CockroachDB now properly checks the width of strings inserted in a [collated string](https://www.cockroachlabs.com/docs/v2.1/collate) column with a specified width. [#29006][#29006]
-- Improved the handling of jobs run prior to a [cluster upgrade](https://www.cockroachlabs.com/docs/v2.1/upgrade-cockroach-version). [#29019][#29019]
-- CockroachDB once again prefers using an IPv4 listen address if a hostname with both IPv4 and IPv6 addresses is provided to `--host`/`--listen-addr`/`--advertise-addr`. [#29158][#29158]
-- Fixed a memory leak when contended queries time out. [#29099][#29099]
-- When the `--background` flag is specified, the [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) command now avoids printing messages to standard output after it has detached to the background. [#29160][#29160]
-
-
-
-- CockroachDB no longer checks key usage attributes in [security certificates](https://www.cockroachlabs.com/docs/v2.1/create-security-certificates). [#29223][#29223]
-
-
SQL language changes
-
-- In a mixed-version cluster, nodes running v2.0 no longer schedule distributed SQL work on nodes running v2.1. [#29168][#29168]
-- When [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate) or [`DROP TABLE`](https://www.cockroachlabs.com/docs/v2.1/drop-table) is run while a [schema change](https://www.cockroachlabs.com/docs/v2.1/online-schema-changes) like `CREATE INDEX` is being processed, the schema change job no longer runs indefinitely. [#29262][#29262]
-- [View](https://www.cockroachlabs.com/docs/v2.1/create-view) and [table](https://www.cockroachlabs.com/docs/v2.1/create-table) names are now recycled quickly after [`DROP VIEW`](https://www.cockroachlabs.com/docs/v2.1/drop-view) and [`DROP TABLE`](https://www.cockroachlabs.com/docs/v2.1/drop-table).
-
-
Command-line changes
-
-- The new `cockroach workload` command provides various generators for data and query loads (see the example after this list). [#28978][#28978]
-- The `csv` and `tsv` formats for `cockroach` commands that output result rows now buffer data for a maximum of 5 seconds. This makes it possible to, for example, view SQL [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/create-changefeed) interactively with [`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo). [#29445][#29445]
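-
-A sketch of the new workload command; the `init`/`run` subcommands and the `bank` generator shown here are assumptions, so consult `cockroach workload --help` for the authoritative list:
-
-~~~ shell
-# Load a sample dataset, then run queries against it for one minute.
-cockroach workload init bank 'postgresql://root@localhost:26257?sslmode=disable'
-cockroach workload run bank --duration=1m 'postgresql://root@localhost:26257?sslmode=disable'
-~~~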
-
-
Bug fixes
-
-- Fixed support for the `--http-host` flag, which was broken in previous 2.1 beta releases. [#29220][#29220]
-- Reduced the duration of partitions in the gossip network when a node crashes to eliminate a cause of temporary data unavailability. [#29317][#29317]
-- The `unnest` and `_pg_expandarray` [functions](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) now return an error when called with `NULL` as the first argument. [#29385][#29385]
-- Fixed a crash caused by JSON values and operations that use [arrays](https://www.cockroachlabs.com/docs/v2.1/array). [#29432][#29432]
-- Fixed a rare crash with the message `no err but aborted txn proto`. [#29456][#29456]
-- Fixed a crash caused by SQL statements containing `->(NULL::STRING)`. [#29414][#29414]
-- Fixed table descriptor corruption when [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate) is run while [`DROP COLUMN`](https://www.cockroachlabs.com/docs/v2.1/drop-column) is being processed. [#29262][#29262]
-
-
Doc updates
-
-- Updated the [Data Replication](https://www.cockroachlabs.com/docs/v2.1/demo-data-replication) tutorial and the [Production Checklist](https://www.cockroachlabs.com/docs/v2.1/recommended-production-settings) to emphasize the importance of manually increasing the replication factor for important internal data when doing so for the `.default` replication zone. [#3702](https://github.com/cockroachdb/docs/pull/3702)
-
-
-
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) created with previous betas and alphas will not work with this version. [#29559][#29559]
-- The experimental, non-recommended `kv.allocator.stat_based_rebalancing.enabled` and `kv.allocator.stat_rebalance_threshold` [cluster settings](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) have been replaced by an improved approach to load-based rebalancing that can be controlled via the new `kv.allocator.load_based_rebalancing` [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings). By default, leases will be rebalanced within a cluster to achieve better QPS balance. [#29663][#29663]
-
-
SQL language changes
-
-- Renamed the `EXPERIMENTAL_OPT` [session setting](https://www.cockroachlabs.com/docs/v2.1/set-vars) to `OPTIMIZER` (see the example after this list). The default value is `ON`, as before. [#29530][#29530]
-- Special characters, such as newlines, are now formatted using `octal` instead of `hex`, for compatibility with PostgreSQL. [#29593][#29593]
-
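-A quick sketch of the renamed setting (the old spelling and the output shape are assumptions):
-
-```sql
-SET optimizer = 'on';  -- formerly SET experimental_opt = 'on'
-SHOW optimizer;        -- expected to report: on
-```
-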
-
Command-line changes
-
-- All `cockroach` client sub-commands (except for `cockroach workload`) now support the `--url` flag. [#29621][#29621]
-- Removed `--log-backtrace-at` and `--verbosity` flags, which were documented as being only useful by CockroachDB developers yet never actually used by CockroachDB developers. [#30092][#30092]
-
-
Admin UI changes
-
-- Long table rows now wrap when necessary. [#29551][#29551]
-- [Diagnostics](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting) requests are now proxied through Cockroach Labs to prevent exposing user IP addresses. [#29194][#29194]
-- Added attributes to the login form to allow LastPass to properly recognize it. [#29561][#29561]
-- Custom and regular charts now have the same width. [#30083][#30083]
-- Improved the UX of the [**Custom Chart**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-custom-chart-debug-page) page, and added the ability to configure multiple independent charts. [#30118][#30118]
-- Improved the design and accessibility of tooltips. [#30115][#30115]
-- Various improvements to the [**Statements**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-statements-page) pages. [#30115][#30115]
- - Simplified and cleaned up the appearance.
- - Added statement retries.
- - Right-aligned all numeric stats.
- - Added more tooltips, including for the **By Gateway Node** table on the **Statement Details** page.
- - Improved tooltips by adding a legend detailing the parts of the bar chart.
- - Highlighted summary rows.
- - Improved table headers.
- - Reordered tables to highlight the most useful data.
- - Widened bar charts.
- - Summarized [`SET`](https://www.cockroachlabs.com/docs/v2.1/set-vars) statements.
- - When a statement fingerprint has sometimes failed, used the optimizer, or been distributed, the number of executions for which that was the case is now shown.
-
-
Bug fixes
-
-- Fixed a bug that would allow the cluster summary text in the Admin UI to overflow its space. [#29548][#29548]
-- Corrected the behavior of `INSERT INTO t DEFAULT VALUES` when there are active schema changes. [#29496][#29496]
-- Fixed a race condition in [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) with a column that was a [collated string](https://www.cockroachlabs.com/docs/v2.1/collate). [#29386][#29386]
-- Fixed a crash caused by certain kinds of [`UPSERT ... RETURNING`](https://www.cockroachlabs.com/docs/v2.1/upsert) statements on tables with active schema changes. [#29543][#29543]
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) now error when a watched table backfills (instead of undefined behavior). [#29559][#29559]
-- Fixed a panic that occurs when verbose logging is enabled. [#29534][#29534]
-- Fixed a panic caused by inserting values of the wrong type into columns depended on by [computed columns](https://www.cockroachlabs.com/docs/v2.1/computed-columns). [#29598][#29598]
-- Fixed an issue where, under severe load, clients were sometimes receiving [retryable errors](https://www.cockroachlabs.com/docs/v2.1/transactions#error-handling) with a non-retryable error code. [#29614][#29614]
-- The [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v2.1/generate-cockroachdb-resources) command now recognizes nodes that specify the HTTP port number using `--http-addr` instead of `--http-port`. [#29536][#29536]
-- Fixed a panic in SQL execution. [#29669][#29669]
-- Fixed a panic caused by malformed UTF-8 SQL strings. [#29668][#29668]
-- Corrected the Postgres `oid` type returned for collated string columns. [#29674][#29674]
-- Enterprise [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) now correctly skip the initial scan when started with the `cursor=` option. [#29613][#29613]
-- Hash functions with `NULL` input now return `NULL`. [#29974][#29974]
-- Prevented a very rare premature failure in [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) caused by a race condition with range splits. [#30009][#30009]
-- Fixed a crash when `SELECT MIN(NULL)` was run with the [SQL optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) enabled. [#30014][#30014]
-- Fixed a rare crash with the message `retryable error for the wrong txn`. [#30046][#30046]
-- Fixed a bug where certain queries, like merge joins, would appear to run out of memory due to incorrect memory accounting and fail. [#30087][#30087]
-- The `string_agg()` [function](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) now accepts `NULL` as a delimiter (see the example after this list). [#30076][#30076]
-
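-A minimal sketch of the `string_agg()` change (inline data for illustration; in PostgreSQL a `NULL` delimiter concatenates values with no separator, and CockroachDB is assumed to match):
-
-```sql
-SELECT string_agg(word, NULL)
-FROM (VALUES ('foo'), ('bar')) AS t(word);  -- expected: 'foobar', instead of an error
-```
-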
-
Performance improvements
-
-- Range replicas are now automatically rebalanced throughout the cluster to even out the amount of QPS being handled by each node. [#29663][#29663]
-- Prevented allocation when checking RPC connection health. [#30055][#30055]
-
-
Doc updates
-
-- Updated the description of [correlated subqueries](https://www.cockroachlabs.com/docs/v2.1/subqueries#correlated-subqueries). More updates coming soon. [#3714](https://github.com/cockroachdb/docs/pull/3714)
-- Updated the description of [`cockroach` client connection parameters](https://www.cockroachlabs.com/docs/v2.1/connection-parameters). [#3715](https://github.com/cockroachdb/docs/pull/3715)
-- Added documentation of the `public` role, which all users belong to. [#3722](https://github.com/cockroachdb/docs/pull/3722)
-- Updated the [Diagnostics Reporting](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting) page with a summary of details reported and how to view the details yourself. [#3737](https://github.com/cockroachdb/docs/pull/3737)
-
-
-
-
Contributors
-
-This release includes 87 merged PRs by 23 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Sankt Petersbug (first-time contributor)
-
-
-
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) will retry, rather than abort, in certain cases when failing to emit to a sink. [#30157][#30157]
-- The new `ALTER ... CONFIGURE ZONE` statement can be used to add, modify, reset, and remove [replication zones](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones), with support for placeholders (`$1`, etc.) and for multiple executions (see the example after this list). The new `SHOW ZONE CONFIGURATION` statement can be used to view existing replication zones. Clients should use these SQL statements instead of the `cockroach zone` sub-commands, which are now deprecated and will be removed in a future version of CockroachDB. [#30173][#30173]
-- Added the `2.0` value for both the `distsql` [session setting](https://www.cockroachlabs.com/docs/v2.1/set-vars) and the `sql.defaults.distsql` [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings), which instructs the database to use the 2.0 `auto` behavior for determining whether queries are distributed or run through the gateway node. [#30209][#30209]
-
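-A sketch of the new zone-configuration statements (the table name is hypothetical, and the payload form of the `CONFIGURE ZONE` argument is an assumption; check the `CONFIGURE ZONE` documentation for the exact syntax):
-
-```sql
-ALTER TABLE kv CONFIGURE ZONE 'num_replicas: 5';  -- YAML payload form assumed
-SHOW ZONE CONFIGURATION FOR TABLE kv;
-```
-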
-
Command-line changes
-
-- The various `cockroach zone` sub-commands are now deprecated and will be removed in a future version of CockroachDB. Clients should use the SQL interface instead via `SHOW ZONE CONFIGURATION` and `ALTER ... CONFIGURE ZONE`. [#30173][#30173]
-- Improved the output of [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.1/view-node-details) to include separate `is_available` and `is_live` columns. [#30268][#30268]
-- The [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v2.1/debug-zip) command now also collects heap profiles that were generated and stored when there was high memory usage. [#30281][#30281]
-
-
Bug fixes
-
-- The `ON DELETE CASCADE` and `ON UPDATE CASCADE` [foreign key actions](https://www.cockroachlabs.com/docs/v2.1/foreign-key#foreign-key-actions) no longer cascade through `NULL`s. [#30122][#30122]
-- Fixed the evaluation of `IS NOT NULL` and `IS NULL` comparison operations involving a non-null constant tuple to return `true` or `false` rather than `NULL` (see the example after this list). [#30184][#30184]
-- Fixed the occasional improper processing of the `WITH` operand with `IMPORT`/`EXPORT`/`BACKUP`/`RESTORE` and [common table expressions](https://www.cockroachlabs.com/docs/v2.1/common-table-expressions). [#30198][#30198]
-- Fixed the return type of an array built from the results of a subquery to be `elementType[]` rather than `tuple{elementType}[]`. [#30237][#30237]
-- Fixed a panic that was occurring when the cost-based optimizer was disabled and an array built from the results of a subquery was used in the `WHERE` clause of an outer query. [#30237][#30237]
-- Fixed a panic that occurred when not all values were present in a composite foreign key. [#30153][#30153]
-- Transaction size limit errors are no longer returned for transactions that have already committed. [#30304][#30304]
-
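-A short illustration of the tuple `IS [NOT] NULL` fix (per the SQL standard, a tuple is `NULL` only if all of its elements are `NULL`, and `NOT NULL` only if none are):
-
-```sql
-SELECT (1, 2) IS NOT NULL;  -- expected: true (previously could yield NULL)
-SELECT (1, 2) IS NULL;      -- expected: false
-```
-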
-
Performance improvements
-
-- Avoided unnecessary allocations when parsing prepared statement placeholders. [#30299][#30299]
-- 1PC transactions now avoid writing the transaction record and intents when pushed due to reads at a higher timestamp. [#30298][#30298]
-
-
-
-- Fixed a vulnerability in which TLS certificates were not validated correctly for internal RPC interfaces. This vulnerability could allow an unauthenticated user with network access to read and write to the cluster. [#30821](https://github.com/cockroachdb/cockroach/issues/30821)
-
-
SQL language changes
-
-- The entries in the `replicas` column of the `crdb_internal.ranges` virtual table are now always sorted by store ID (see the example after this list).
-- The `EXPERIMENTAL_RELOCATE` statement no longer temporarily increases the number of replicas in a range more than one above the range's replication factor, preventing rare edge cases of unavailability.
-
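-One way to observe the new ordering (internal virtual tables are unversioned and may change between releases):
-
-```sql
-SELECT range_id, replicas FROM crdb_internal.ranges LIMIT 5;  -- store IDs in each replicas array now appear sorted
-```
-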
-
Command-line changes
-
-- The `--log-dir`, `--log-dir-max-size`, `--log-file-max-size`, and `--log-file-verbosity` flags are now only available for the [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo) commands. Previously, these flags were available for other commands but rarely used or functional. [#30341][#30341] {% comment %}doc{% endcomment %}
-
-
Admin UI changes
-
-- The new **SQL Query Errors** graph on the [**SQL** dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-sql-dashboard) shows the number of queries that returned a runtime or execution error. [#30371][#30371] {% comment %}doc{% endcomment %}
-- Hovering over a truncated entry in the [**Events** panel](https://www.cockroachlabs.com/docs/v2.1/admin-ui-access-and-navigate#events-panel) now shows the full description of the event. [#30391][#30391]
-
-
Bug fixes
-
-- The [`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo) command now runs with replication disabled. [#30517][#30517]
-- The [**Jobs** page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-jobs-page) now sorts by **Creation Time** by default instead of by **User**. [#30428][#30428]
-- Fixed a panic in the [optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) code when generator functions such as `generate_series()` are used as the argument to an aggregate function. [#30362][#30362]
-- Corrected the help text for [`EXPORT`](https://www.cockroachlabs.com/docs/v2.1/export). [#30425][#30425]
-- Ignored more unsupported clauses in [`IMPORT ... PGDUMP`](https://www.cockroachlabs.com/docs/v2.1/import). [#30425][#30425]
-- Fixed [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) of empty or small tables under rare conditions. [#30425][#30425]
-- Fixed a panic when generator functions such as `unnest()` are used in the [`SELECT`](https://www.cockroachlabs.com/docs/v2.1/select-clause) list with `GROUP BY`. [#30462][#30462]
-- Fixed a panic caused by columns being reordered when using [`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert) with a `RETURNING` clause. [#30467][#30467]
-- Fixed a panic when a [correlated subquery](https://www.cockroachlabs.com/docs/v2.1/subqueries#correlated-subqueries) in the `WHERE` clause contains an aggregate function referencing the outer query. This now causes an error since aggregates are not allowed in `WHERE`. [#30522][#30522]
-- Corrected the list of permitted values printed when a non-permitted value is set for the `distsql` [session variable](https://www.cockroachlabs.com/docs/v2.1/set-vars). [#30631][#30631]
-
-
Performance improvements
-
-- Removed unnecessary synchronous disk writes caused by erroneous logic in the Raft implementation. [#30459][#30459]
-- Range replicas are now automatically rebalanced throughout the cluster to even out the amount of QPS being handled by each node by default. Previously, this was available as a cluster setting but was not the default behavior. [#30649][#30649] {% comment %}doc{% endcomment %}
-
-
-
-- `EXECUTE` is no longer an [explainable statement](https://www.cockroachlabs.com/docs/v2.1/explain). As an alternative, it is possible to `PREPARE ... AS EXPLAIN ...` and then execute the prepared statement to see the plan for a prepared query. [#30725][#30725]
-
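-A sketch of the suggested workaround (hypothetical table and placeholder value):
-
-```sql
-PREPARE show_plan AS EXPLAIN SELECT * FROM kv WHERE k = $1;
-EXECUTE show_plan(1);  -- prints the plan rather than running the query
-```
-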
-
Admin UI changes
-
-- Removed read and write graphs from the [Hardware Dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-hardware-dashboard). [#30655][#30655]
-
-
Bug fixes
-
-- `EXPLAIN ALTER DATABASE ... RENAME` no longer renames the target database. [#30661][#30661]
-- `EXPLAIN ALTER TABLE ... RENAME` no longer renames the target table. [#30661][#30661]
-- `EXPLAIN ALTER TABLE ... RENAME COLUMN` no longer renames the target column. [#30661][#30661]
-- `EXPLAIN ALTER INDEX ... RENAME` no longer renames the target index. [#30661][#30661]
-- It is once again possible to use [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.1/explain) for all preparable statements, and prepare all explainable statements. [#30661][#30661]
-- [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate) is now properly restricted in SQL transactions like other DDL statements. [#30661][#30661]
-- [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate) can now be used with [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.1/explain) and as a prepared statement. [#30661][#30661]
-- The default unit for converting a string value when setting the `statement_timeout` [session variable](https://www.cockroachlabs.com/docs/v2.1/set-vars) is now milliseconds for compatibility with PostgreSQL (see the example after this list). [#30654][#30654]
-
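-A sketch of the new default unit for `statement_timeout` (a bare numeric string is now read as milliseconds, matching PostgreSQL):
-
-```sql
-SET statement_timeout = '10000';  -- interpreted as 10000 milliseconds, i.e., 10 seconds
-SHOW statement_timeout;
-```
-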
-
Doc updates
-
-- Added a [Migration Overview](https://www.cockroachlabs.com/docs/v2.1/migration-overview) and specific guides for [Migrating from Postgres](https://www.cockroachlabs.com/docs/v2.1/migrate-from-postgres), [Migrating from MySQL](https://www.cockroachlabs.com/docs/v2.1/migrate-from-mysql), and [Migrating from CSV](https://www.cockroachlabs.com/docs/v2.1/migrate-from-csv). [#3766](https://github.com/cockroachdb/docs/pull/3766)
-- Called out performance-optimized configuration files for [Kubernetes single-cluster deployments](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes). [#3827](https://github.com/cockroachdb/docs/pull/3827) [#3838](https://github.com/cockroachdb/docs/pull/3838)
-- Documented [how replication zones affect secondary indexes](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#replication-zone-levels). [#3818](https://github.com/cockroachdb/docs/pull/3818)
-- Clarified that [per-replica constraints in replication zones](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#scope-of-constraints) do not need to add up to total replicas. [#3812](https://github.com/cockroachdb/docs/pull/3812)
-- Clarified a [known limitation about schema changes inside transactions](https://www.cockroachlabs.com/docs/v2.1/known-limitations#schema-changes-within-transactions). [#3814](https://github.com/cockroachdb/docs/pull/3814)
-- Updated the [`ARRAY`](https://www.cockroachlabs.com/docs/v2.1/array) documentation to cover casting from array to `STRING` values. [#3813](https://github.com/cockroachdb/docs/pull/3813)
-- Documented the use of `--locality` when using [`cockroach gen haproxy`](https://www.cockroachlabs.com/docs/v2.1/generate-cockroachdb-resources#haproxy) to generate an HAProxy config file. [#3809](https://github.com/cockroachdb/docs/pull/3809)
-- Updated the [session variables](https://www.cockroachlabs.com/docs/v2.1/set-vars) documentation. [#3799](https://github.com/cockroachdb/docs/pull/3799)
-- Updated the list of information included in a [`debug zip`](https://www.cockroachlabs.com/docs/v2.1/debug-zip). [#3796](https://github.com/cockroachdb/docs/pull/3796)
-
-
-
-- The output of [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v2.1/show-jobs) now reports ongoing jobs first in start time order, followed by completed jobs in finished time order. [#31005][#31005]
-- CockroachDB now supports more customizations from PostgreSQL client drivers when initially setting up the client connection. [#31021][#31021]
-- Columns that are part of a table's [`PRIMARY KEY`](https://www.cockroachlabs.com/docs/v2.1/primary-key) can no longer be specified as [`STORING` columns](https://www.cockroachlabs.com/docs/v2.1/create-index#store-columns) in secondary indexes on the table (see the example after this list). [#31032][#31032]
-- The output of `SHOW ZONE CONFIGURATIONS` and `SHOW ZONE CONFIGURATION FOR` now only shows the zone name and the SQL representation of the config. [#31089][#31089]
-
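-A sketch of the new `STORING` restriction (hypothetical table; primary key columns are already available in every secondary index, so storing them again is redundant):
-
-```sql
-CREATE TABLE accounts (id INT PRIMARY KEY, balance INT);
-CREATE INDEX ON accounts (balance) STORING (id);  -- now rejected: id is part of the primary key
-```
-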
-
Command-line changes
-
-- It is now possible to provide initial/default values for any customizable [session variable](https://www.cockroachlabs.com/docs/v2.1/set-vars) in the client connection URL. [#31021][#31021]
-
-
Admin UI changes
-
-- Leveraged the PopperJS positioning engine to automate the positioning of tooltips. [#30476][#30476]
-- Added a graph of the average QPS per store to the [**Replication** dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-replication-dashboard). Note that this is an exponentially weighted moving average rather than an instantaneous measurement; it is primarily of interest because it is the signal used for load-based rebalancing decisions. [#30889][#30889]
-- Added a bar chart to the memory and capacity usage columns on the [**Node List**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-cluster-overview-page#node-list). These columns sort by percentage used. [#31070][#31070]
-- Added a debug page with a form that lets users manually enqueue a range in one of the various store-level replica queues on a specified store. This feature is intended for advanced users only. [#31092][#31092]
-
-
Bug fixes
-
-- Lookup joins no longer omit rows in certain circumstances during limit queries. [#30836][#30836]
-- Fixed a panic due to malformed placeholder values. [#30860][#30860]
-- The [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) command now prints a hint about waiting for a join or [`cockroach init`](https://www.cockroachlabs.com/docs/v2.1/initialize-a-cluster) only when starting nodes for a new cluster, not when adding nodes to an existing cluster. [#30953][#30953]
-- Fixed a possible crash when using filters with `IN` expressions. [#30968][#30968]
-- Prevented an edge case in load-based rebalancing where the cluster could transfer the lease for a range to a replica that isn't keeping up with the other replicas, causing brief periods where no replicas think they're leaseholder for the range and thus no requests can be processed for the range. [#30972][#30972]
-- CockroachDB now properly ignores non-alphanumeric characters in encoding names passed to [functions](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) like `convert_from()` and `client_encoding()`, for compatibility with PostgreSQL. [#31021][#31021]
-- CockroachDB now properly recognizes the value of `extra_float_digits` provided by clients as a [connection parameter](https://www.cockroachlabs.com/docs/v2.1/connection-parameters). [#31021][#31021]
-- CockroachDB now properly recognizes two-part values for the `DateStyle` [session variable](https://www.cockroachlabs.com/docs/v2.1/set-vars) and [connection parameter](https://www.cockroachlabs.com/docs/v2.1/connection-parameters), for compatibility with PostgreSQL. [#31021][#31021]
-- CockroachDB now reports all server status parameters supported by PostgreSQL when setting up a session. This is expected to improve compatibility with some drivers. [#31021][#31021]
-- CockroachDB now properly uses the client-provided default values when using the [`RESET`](https://www.cockroachlabs.com/docs/v2.1/reset-vars) statement (or `SET ... = DEFAULT`). [#31021][#31021]
-- CockroachDB now properly fills the columns `boot_val` and `reset_val` in `pg_catalog.pg_settings`, for better compatibility with PostgreSQL. [#31021][#31021]
-- CockroachDB now properly supports renaming a column that's also stored in an index. [#31074][#31074]
-- During password login, "user does not exist" and "invalid password" cases now produce the same error message. [#30935][#30935]
-
-
Performance improvements
-
-- CockroachDB now avoids acquiring an exclusive lock when checking replica status in the write proposal path. [#30920][#30920]
-
-
Doc updates
-
-- Added a [performance tuning tutorial](https://www.cockroachlabs.com/docs/v2.1/performance-tuning) demonstrating essential techniques for getting fast reads and writes in CockroachDB, starting with a single-region deployment and expanding into multiple regions. [#3854](https://github.com/cockroachdb/docs/pull/3854)
-- Added a tutorial demonstrating the importance of [serializable transactions](https://www.cockroachlabs.com/docs/v2.1/demo-serializable). [#3844](https://github.com/cockroachdb/docs/pull/3844)
-- Added documentation on [index name resolution](https://www.cockroachlabs.com/docs/v2.1/sql-name-resolution#index-name-resolution). [#3830](https://github.com/cockroachdb/docs/pull/3830).
-- Updated the documentation on [set-returning functions (SRFs)](https://www.cockroachlabs.com/docs/v2.1/table-expressions#table-generator-functions). [#3810](https://github.com/cockroachdb/docs/pull/3810)
-- Updated the example on how [auto-incrementing is not always sequential](https://www.cockroachlabs.com/docs/v2.1/serial#auto-incrementing-is-not-always-sequential). [#3832](https://github.com/cockroachdb/docs/pull/3832)
-
-
-
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) can now be configured with a minimum duration between emitted resolved timestamps (see the example after this list). [#31008][#31008]
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) now have limited and experimental support for the `AVRO` format. [#31143][#31143]
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) now continue running when watched tables are [`ALTER`ed](https://www.cockroachlabs.com/docs/v2.1/alter-table) in ways that require a backfill. [#31165][#31165]
-
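-A sketch of the new resolved-timestamp option (the table name and Kafka address are placeholders):
-
-```sql
-CREATE CHANGEFEED FOR TABLE orders
-INTO 'kafka://broker:9092'
-WITH resolved = '10s';  -- emit resolved timestamps at most once every 10 seconds
-```
-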
-
SQL language changes
-
-- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.1/explain) now always shows filter and join conditions. [#31186][#31186]
-- CockroachDB now supports CTEs inside [views](https://www.cockroachlabs.com/docs/v2.1/views). [#31051][#31051]
-- CockroachDB now hints that internal errors should be [reported as bugs by users](https://www.cockroachlabs.com/docs/v2.1/file-an-issue). Additionally, internal errors are now collected and submitted (anonymized) with other node statistics when statistics collection is enabled. [#31272][#31272]
-- It is now possible to force the use of a specific index for `DELETE` or `UPDATE` statements (see the example after this list). [#31279][#31279]
-- CockroachDB now handles binary fields dumped by `mysqldump v5.7.23` with the `_binary` prefix. [#31305][#31305]
-- `EXPLAIN ANALYZE` is now a valid equivalent of [`EXPLAIN ANALYZE (DISTSQL)`](https://www.cockroachlabs.com/docs/v2.1/explain-analyze). [#31278][#31278]
-- When a query references a table in [`information_schema`](https://www.cockroachlabs.com/docs/v2.1/information-schema) or `pg_catalog` that is not yet implemented, this is now reported as telemetry if statistics reporting is enabled. This will help determine which features should be implemented next for compatibility. [#31357][#31357]
-
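-A sketch of forcing an index on a mutation statement (hypothetical table and index; the `table@index` hint syntax mirroring the one used in `SELECT` is an assumption):
-
-```sql
-UPDATE kv@kv_v_idx SET v = v + 1 WHERE v > 10;
-DELETE FROM kv@kv_v_idx WHERE v > 100;
-```
-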
-
Admin UI changes
-
-- The **Service latency: {90,99}th percentile** graphs on the [**Overview**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-overview-dashboard) and [**SQL**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-sql-dashboard) dashboards, as well as the P50 and P99 latency numbers in the time series area sidebar, now reflect latencies of both local and distributed queries. Previously, they only included local queries. [#31116][#31116]
-- Links to documentation pages now open in a new tab. [#31132][#31132]
-- Improved the view of databases with no tables. [#31231][#31231]
-- Updated the [**Jobs**](https://www.cockroachlabs.com/docs/v2.1/admin-ui-jobs-page) dashboard to make each row expandable, allowing the user to see the error message for failed jobs. [#31237][#31237]
-
-
Bug fixes
-
-- Fixed schema change rollback caused by GC TTL threshold error. [#31153][#31153]
-- Fixed the `_admin/v1/enqueue_range` debug endpoint to always respect its `node_id` parameter. [#31087][#31087]
-- CockroachDB now reports an unimplemented error when a common table expression containing [`INSERT`](https://www.cockroachlabs.com/docs/v2.1/insert)/[`UPDATE`](https://www.cockroachlabs.com/docs/v2.1/update)/[`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert)/[`DELETE`](https://www.cockroachlabs.com/docs/v2.1/delete) is not otherwise used in the remainder of the query. [#31051][#31051]
-- CockroachDB does not silently ignore `WITH` clauses within parentheses anymore. [#31051][#31051]
-- Fixed a rare scenario where a [backup](https://www.cockroachlabs.com/docs/v2.1/backup) could incorrectly include a key for an aborted transaction. [#31316][#31316]
-- CockroachDB now avoids repeatedly trying a replica that was found to be in the process of being added. [#31250][#31250]
-- CockroachDB will no longer fail in unexpected ways or write invalid data when the type of input values provided to [`INSERT`](https://www.cockroachlabs.com/docs/v2.1/insert)/[`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert) does not match the type of the target columns. [#31280][#31280]
-- [`UPDATE`](https://www.cockroachlabs.com/docs/v2.1/update) now verifies the column constraints before [`CHECK`](https://www.cockroachlabs.com/docs/v2.1/check) constraints, for compatibility with PostgreSQL. [#31280][#31280]
-- It is no longer possible to use not-fully-added-yet columns in the `RETURNING` clause of [`UPDATE`](https://www.cockroachlabs.com/docs/v2.1/update) statements. [#31280][#31280]
-- CockroachDB no longer (incorrectly and silently) accepts a [computed column](https://www.cockroachlabs.com/docs/v2.1/computed-columns) on the left side of the assignment in an [`ON CONFLICT`](https://www.cockroachlabs.com/docs/v2.1/insert#on-conflict-clause) clause. [#31280][#31280]
-- CockroachDB no longer (incorrectly and silently) accepts a not-fully-added-yet column on the left side of the assignment in an [`ON CONFLICT`](https://www.cockroachlabs.com/docs/v2.1/insert#on-conflict-clause) clause. [#31280][#31280]
-- CockroachDB no longer (incorrectly and silently) ignores the `HAVING` clause on [`SELECT`](https://www.cockroachlabs.com/docs/v2.1/select-clause) without `FROM`. [#31347][#31347]
-- The **Range Debug** page now handles cases in which there is no lease start or expiration time. [#31367][#31367]
-
-
Build changes
-
-- CockroachDB can now be built from source on macOS 10.14 (Mojave). [#31308][#31308]
-
-
Doc updates
-
-- Updated the documentation for [encryption at rest](https://www.cockroachlabs.com/docs/v2.1/encryption). [#3848](https://github.com/cockroachdb/docs/pull/3848)
-- Updated the documentation on how to [orchestrate CockroachDB across multiple Kubernetes clusters](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes-multi-cluster). [#3845](https://github.com/cockroachdb/docs/pull/3845) [#3847](https://github.com/cockroachdb/docs/pull/3847)
-- Updated the documentation on the [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer). [#3784](https://github.com/cockroachdb/docs/pull/3784)
-- Added documentation for [fast path deletes for interleaved tables](https://www.cockroachlabs.com/docs/v2.1/interleave-in-parent). [#3834](https://github.com/cockroachdb/docs/pull/3834)
-
-
-
-- Fixed a panic when setting some `kv.bulk_io_write` [cluster settings](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) to a value < 1. [#31603][#31603]
-- Fixed a bug where entry application on Raft followers could fall behind entry application on the leader, causing stalls during splits. [#31619][#31619]
-- Fixed a panic caused by an incorrect assumption in the SQL optimizer code that `ROWS FROM` clauses contain only functions. [#31769][#31769]
-- Fixed a bug causing committed read-only transactions to be counted as aborted in metrics. [#31608][#31608]
-- Fixed a bug where [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/create-changefeed) may not correctly retry temporary errors when communicating with a sink. [#31559][#31559]
-
-
-
-Release Date: {{ include.release_date | date: "%B %-d, %Y" }}
-
-With the release of CockroachDB v2.1, we’ve made it easier than ever to migrate from MySQL and Postgres, improved our scalability on transactional workloads by 5x, enhanced our troubleshooting workflows in the Admin UI, and launched a managed offering to help teams deploy low-latency, multi-region clusters with minimal operator overhead.
-
-- Check out a [summary of the most significant user-facing changes](#v2-1-0-summary).
-- Then [upgrade to CockroachDB v2.1](https://www.cockroachlabs.com/docs/v2.1/upgrade-cockroach-version).
-
-
Summary
-
-This section summarizes the most significant user-facing changes in v2.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases.
-
-- [Managed Offering](#v2-1-0-managed-offering)
-- [Enterprise Features](#v2-1-0-enterprise-features)
-- [Core Features](#v2-1-0-core-features)
-- [Known Limitations](#v2-1-0-known-limitations)
-- [Documentation](#v2-1-0-documentation)
-
-
-
-
Managed Offering
-
-The Managed CockroachDB offering is currently in Limited Availability and accepting customers on a qualified basis. The offering provides a running CockroachDB cluster suitable to your needs, fully managed by Cockroach Labs on GCP or AWS. Benefits include:
-
-- No provisioning or deployment efforts for you
-- Daily full backups and hourly incremental backups of your data
-- Upgrades to the latest stable release of CockroachDB
-- Monitoring to provide SLA-level support
-
-For more details, see the [Managed CockroachDB](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) docs.
-
-
Enterprise Features
-
-These new features require an [enterprise license](https://www.cockroachlabs.com/docs/v2.1/enterprise-licensing). Register for a 30-day trial license [here](https://www.cockroachlabs.com/get-cockroachdb/enterprise/).
-
-Feature | Description
---------|------------
-[Change Data Capture](https://www.cockroachlabs.com/docs/v2.1/change-data-capture) (Beta)| Change data capture (CDC) provides efficient, distributed, row-level change feeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. Use the [`CREATE CHANGEFEED`](https://www.cockroachlabs.com/docs/v2.1/create-changefeed) statement to create a new changefeed, which provides row-level change subscriptions.
-[Encryption at Rest](https://www.cockroachlabs.com/docs/v2.1/encryption) (Experimental) | Encryption at Rest provides transparent encryption of a node's data on the local disk.
-[`EXPORT`](https://www.cockroachlabs.com/docs/v2.1/export) (Beta) | The `EXPORT` statement exports tabular data or the results of arbitrary `SELECT` statements to CSV files. Using the CockroachDB [distributed execution engine](https://www.cockroachlabs.com/docs/v2.1/architecture/sql-layer#distsql), `EXPORT` parallelizes CSV creation across all nodes in the cluster, making it possible to quickly get large sets of data out of CockroachDB in a format that can be ingested by downstream systems.
-
-
Core Features
-
-These new features are freely available in the core version and do not require an enterprise license.
-
-
SQL
-
-Feature | Description
---------|------------
-[`ALTER TABLE ... ALTER TYPE`](https://www.cockroachlabs.com/docs/v2.1/alter-type) | The `ALTER TABLE ... ALTER TYPE` statement changes a column's [data type](https://www.cockroachlabs.com/docs/v2.1/data-types). Only type changes that neither require data checks nor data conversion are supported at this time.
-[`ALTER COLUMN ... DROP STORED`](https://www.cockroachlabs.com/docs/v2.1/alter-column#convert-a-computed-column-into-a-regular-column) | The `ALTER TABLE ... ALTER COLUMN ... DROP STORED` statement converts a stored, [computed column](https://www.cockroachlabs.com/docs/v2.1/computed-columns) into a regular column.
-[`CANCEL JOB`](https://www.cockroachlabs.com/docs/v2.1/cancel-job) | The `CANCEL JOB` statement can now be executed on long-running schema change jobs, causing them to terminate early and roll back. Also, the `CANCEL JOBS` variant of the statement lets you cancel multiple jobs at once.
-[`CANCEL QUERIES`](https://www.cockroachlabs.com/docs/v2.1/cancel-query) | The `CANCEL QUERIES` variant of the `CANCEL QUERY` statement lets you cancel multiple queries at once.
-[`CANCEL SESSIONS`](https://www.cockroachlabs.com/docs/v2.1/cancel-session) | The `CANCEL SESSIONS` variant of the `CANCEL SESSION` statement lets you stop multiple long-running sessions. `CANCEL SESSION` will attempt to cancel the currently active query and end the session.
-[Cost-Based Optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) | The cost-based optimizer seeks the lowest cost for a query, usually related to time. In versions 2.1 and later, CockroachDB's cost-based optimizer is enabled by default.
-[`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v2.1/create-statistics) (Experimental) | The `CREATE STATISTICS` statement generates table statistics for the [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) to use.
-[`EXPLAIN (DISTSQL)`](https://www.cockroachlabs.com/docs/v2.1/explain#distsql-option) | The `DISTSQL` option generates a physical query plan for a query. Query plans provide information around SQL execution, which can be used to troubleshoot slow queries.
-[`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v2.1/explain-analyze) | The `EXPLAIN ANALYZE` statement executes a SQL query and returns a physical query plan with execution statistics.
-[Fast Deletes for Interleaved Tables](https://www.cockroachlabs.com/docs/v2.1/interleave-in-parent#benefits) | Under certain conditions, deleting rows from interleaved tables that use [`ON DELETE CASCADE`](https://www.cockroachlabs.com/docs/v2.1/add-constraint#add-the-foreign-key-constraint-with-cascade) will use an optimized code path and run much faster.
-[Lookup Joins](https://www.cockroachlabs.com/docs/v2.1/joins#lookup-joins) (Experimental) | A lookup join is beneficial to use when there is a large imbalance in size between the two tables, as it only reads the smaller table and then looks up matches in the larger table. A lookup join requires that the right-hand (i.e., larger) table is indexed on the equality column.
-[`public` Role](https://www.cockroachlabs.com/docs/v2.1/authorization#create-and-manage-roles) | All users now belong to the `public` role, to which you can [grant](https://www.cockroachlabs.com/docs/v2.1/grant) and [revoke](https://www.cockroachlabs.com/docs/v2.1/revoke) privileges.
-[`SET` (session variable)](https://www.cockroachlabs.com/docs/v2.1/set-vars) [`SHOW` (session variable)](https://www.cockroachlabs.com/docs/v2.1/show-vars) | Added the following options:<br><br>`statement_timeout`: The amount of time a statement can run before being stopped.<br><br>`optimizer`: The mode in which a query execution plan is generated. If set to `on`, the cost-based optimizer is enabled by default and the heuristic planner will only be used if the query is not supported by the cost-based optimizer; if set to `off`, all queries are run through the legacy heuristic planner.
-[`SHOW STATISTICS`](https://www.cockroachlabs.com/docs/v2.1/show-statistics) (Experimental) | The `SHOW STATISTICS` statement lists [table statistics](https://www.cockroachlabs.com/docs/v2.1/create-statistics) used by the [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer).
-[`SNAPSHOT` isolation level](https://www.cockroachlabs.com/docs/v2.1/transactions#isolation-levels) | **Removed.** Transactions that request to use `SNAPSHOT` are now mapped to [`SERIALIZABLE`](https://www.cockroachlabs.com/docs/v2.1/demo-serializable).
-[Subquery Support](https://www.cockroachlabs.com/docs/v2.1/subqueries#correlated-subqueries) | CockroachDB's [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) supports several common types of correlated subqueries. A subquery is said to be "correlated" when it uses table or column names defined in the surrounding query.
-
-
CLI
-
-Feature | Description
---------|------------
-[`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo) | The `cockroach demo` command starts a temporary, in-memory, single-node CockroachDB cluster and opens an [interactive SQL shell](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) to it.
-[`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node) | The new `--advertise-addr` flag recognizes both a hostname/address and port and replaces the `--advertise-host` and `--advertise-port` flags, which are now deprecated.<br><br>The new `--listen-addr` flag recognizes both a hostname/address and port and replaces the `--host` and `--port` flags, which are now deprecated for `cockroach start` but remain valid for other client commands.<br><br>The new `--http-addr` flag recognizes both a hostname/address and port and replaces the `--http-host` flag, which is now deprecated.
-[`cockroach sql`](https://www.cockroachlabs.com/docs/v2.1/use-the-built-in-sql-client) | The `cockroach sql` command and other client commands that display SQL results now use the new table result formatter by default, replacing the previous formatter called `pretty`. This provides more compact and more reusable results.
-`cockroach zone` | **Deprecated.** The `cockroach zone` command has been deprecated. To manage [replication zones](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones), use the [`CONFIGURE ZONE`](https://www.cockroachlabs.com/docs/v2.1/configure-zone) statement to [add](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#create-a-replication-zone-for-a-system-range), [modify](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#edit-the-default-replication-zone), [reset](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#reset-a-replication-zone), and [remove](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#remove-a-replication-zone) replication zones.
-
-
Operations
-
-Feature | Description
---------|------------
-[Controlling Leaseholder Location](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones#constrain-leaseholders-to-specific-datacenters) | Using replication zones, you can now specify preferences for where a range's leaseholders should be placed to increase performance in some scenarios.
-[DBeaver Support](https://www.cockroachlabs.com/docs/v2.1/third-party-database-tools) | DBeaver, a cross-platform database GUI, has been thoroughly vetted and tested with CockroachDB v2.1.
-[Load-based Rebalancing](https://www.cockroachlabs.com/docs/v2.1/architecture/replication-layer#membership-changes-rebalance-repair) | In addition to the rebalancing that occurs when nodes join or leave a cluster, leases and replicas are rebalanced automatically based on the relative load across the nodes within a cluster. Note that depending on the needs of your deployment, you can exercise additional control over the location of leases and replicas by [configuring replication zones](https://www.cockroachlabs.com/docs/v2.1/configure-replication-zones).
-[Migration from Postgres and MySQL](https://www.cockroachlabs.com/docs/v2.1/migration-overview) | The `IMPORT` command now supports importing dump files from Postgres and MySQL.
-[Monitoring Kubernetes Deployments](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes) | Kubernetes tutorials now feature steps on how to integrate with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data, and set up [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/).
-[Multi-Cluster Kubernetes Deployments](https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes-multi-cluster) | You can now orchestrate a secure CockroachDB deployment across three Kubernetes clusters, each in a different geographic region, using the StatefulSet feature to manage the containers within each cluster and linking them together via DNS.
-[Pipelining of Transactional Writes](https://www.cockroachlabs.com/docs/v2.1/architecture/transaction-layer#transaction-pipelining) | Transactional writes are pipelined when being replicated and when being written to disk, dramatically reducing the latency of transactions that perform multiple writes.
-[Preferring Local Networks](https://www.cockroachlabs.com/docs/v2.1/start-a-node)| The new `--locality-advertise-addr` flag on [`cockroach start`](https://www.cockroachlabs.com/docs/v2.1/start-a-node#networking) can be used to tell nodes in specific localities to prefer local or private interfaces. This flag is useful when running a cluster across multiple networks, where nodes in a given network have access to a private or local interface while nodes outside the network do not.
-[Rolling Upgrade Auto-finalization](https://www.cockroachlabs.com/docs/v2.1/upgrade-cockroach-version) | By default, as soon as all nodes are running CockroachDB v2.1, the upgrade process will be **auto-finalized**. This will enable certain performance improvements and bug fixes introduced in the new version.
-[Viewing Node Status for an Unavailable Cluster](https://www.cockroachlabs.com/docs/v2.1/view-node-details) | The `cockroach node status` command can now be run even when a majority of nodes are down. Running the command now shows an additional field: `is_available`.
-
-
Admin UI
-
-Feature | Description
---------|------------
-[Advanced Debugging Page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-debug-pages) (Experimental) | The **Advanced Debugging** page provides links to advanced monitoring and troubleshooting reports and cluster configuration details.
-[Hardware Dashboard](https://www.cockroachlabs.com/docs/v2.1/admin-ui-hardware-dashboard) | The **Hardware** dashboard lets you monitor CPU usage, disk throughput, network traffic, storage capacity, and memory.
-[Statements page](https://www.cockroachlabs.com/docs/v2.1/admin-ui-statements-page) | The **Statements** page helps you identify frequently executed or high latency SQL statements. It also allows you to view the details of SQL statement fingerprints, which are groupings of similar SQL statements with literal values replaced by underscores.
-[User Authentication](https://www.cockroachlabs.com/docs/v2.1/admin-ui-access-and-navigate) | As of v2.1, users must have a [username and password](https://www.cockroachlabs.com/docs/v2.1/create-user) to access the Admin UI in a secure cluster.
-
-
Known Limitations
-
-For information about limitations we've identified in CockroachDB v2.1, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v2.1/known-limitations).
-
-
Documentation
-
-Topic | Description
-------|------------
-[Experimental Features](https://www.cockroachlabs.com/docs/v2.1/experimental-features) | This new page lists the experimental features that are available in CockroachDB v2.1.
-[Client Connection Parameters](https://www.cockroachlabs.com/docs/v2.1/connection-parameters) | This new page describes the parameters used to establish a client connection, which determine the CockroachDB cluster a client connects to and how the network connection is established.
-[Deploying CockroachDB with `systemd`](https://www.cockroachlabs.com/docs/v2.1/manual-deployment) | The on-premises and cloud deployment tutorials now include instructions for using `systemd` to start the nodes of a cluster.
-[Manual and Automated Backups](https://www.cockroachlabs.com/docs/v2.1/backup-and-restore#automated-full-and-incremental-backups) | This page has been updated to provide both manual and automated backup guidance.
-[Migration Guide](https://www.cockroachlabs.com/docs/v2.1/migration-overview) | This new guide provides an [overview of migrating to CockroachDB](https://www.cockroachlabs.com/docs/v2.1/migration-overview), as well as specific instructions for [migrating from Postgres](https://www.cockroachlabs.com/docs/v2.1/migrate-from-postgres), [migrating from MySQL](https://www.cockroachlabs.com/docs/v2.1/migrate-from-mysql), and [migrating from CSV](https://www.cockroachlabs.com/docs/v2.1/migrate-from-csv).
-[Networking Guidance](https://www.cockroachlabs.com/docs/v2.1/recommended-production-settings#networking) | The Production Checklist now provides a detailed explanation of network flags and scenarios.
-[Online Schema Changes](https://www.cockroachlabs.com/docs/v2.1/online-schema-changes) | This new page explains how CockroachDB updates table schemas without imposing any downtime or negative consequences on applications.
-[Performance Benchmarking](https://www.cockroachlabs.com/docs/v2.1/performance-benchmarking-with-tpc-c) | This page walks you through [TPC-C](http://www.tpc.org/tpcc/) performance benchmarking on CockroachDB. It measures tpmC (new order transactions/minute) on two TPC-C datasets: 1,000 warehouses (for a total dataset size of 200GB) on 3 nodes and 10,000 warehouses (for a total dataset size of 2TB) on 30 nodes.
-[Performance Tuning](https://www.cockroachlabs.com/docs/v2.1/performance-tuning) | This new tutorial shows you essential techniques for getting fast reads and writes in CockroachDB, starting with a single-region deployment and expanding into multiple regions.
-[Secure "Build an App"](https://www.cockroachlabs.com/docs/v2.1/build-an-app-with-cockroachdb) | Most client driver and ORM tutorials now provide code samples and guidance for secure clusters.
-[Serializable Transactions](https://www.cockroachlabs.com/docs/v2.1/demo-serializable) | This new tutorial goes through a hypothetical scenario that demonstrates the importance of `SERIALIZABLE` isolation for data correctness.
-[Window Functions](https://www.cockroachlabs.com/docs/v2.1/window-functions) | This new page provides information about window function support in CockroachDB.
diff --git a/src/current/_includes/releases/v2.1/v2.1.1.md b/src/current/_includes/releases/v2.1/v2.1.1.md
deleted file mode 100644
index a181a80ff13..00000000000
--- a/src/current/_includes/releases/v2.1/v2.1.1.md
+++ /dev/null
@@ -1,76 +0,0 @@
-
-
-- Renamed the first column name returned by [`SHOW STATISTICS`](https://www.cockroachlabs.com/docs/v2.1/show-statistics) to `statistics_name`. [#32045][#32045] {% comment %}doc{% endcomment %}
-- CockroachDB now de-correlates and successfully executes many queries containing correlated `EXISTS` subqueries. Previously, these queries caused a de-correlation error. [#32026][#32026] {% comment %}doc{% endcomment %}
-- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting) is enabled, attempts to use `CREATE/DROP SCHEMA`, `DEFERRABLE`, `CREATE TABLE (LIKE ...)`, `CREATE TABLE ... WITH`, and the "fetch limit" parameter (e.g., via JDBC) will now be collected as telemetry to gauge demand for these currently unsupported features. Also, the name of SQL [built-in functions](https://www.cockroachlabs.com/docs/v2.1/functions-and-operators) will be collected upon evaluation errors. [#31638][#31638] {% comment %}doc{% endcomment %}
-
-
Bug fixes
-
-- Fixed a small memory leak when running distributed queries. [#31759][#31759]
-- The `confkey` column of `pg_catalog.pg_constraint` no longer includes columns that were not involved in the foreign key reference. [#31895][#31895]
-- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) no longer chooses the wrong index for a scan because of incorrect selectivity estimation. [#32011][#32011]
-- Fixed a bug that caused transactions to unnecessarily return a "too large" error. [#31821][#31821]
-- Fixed rare deadlocks during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import), [`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore), and [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup). [#32016][#32016]
-- Fixed a panic caused by incorrectly encoded Azure credentials. [#32016][#32016]
-- Fixed a bug in the [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) that sometimes prevented passing ordering requirements through aggregations. [#32089][#32089]
-- Fixed a bug that sometimes caused invalid results or an "incorrectly ordered stream" error with streaming aggregations. [#32097][#32097]
-- Fixed a bug that caused some queries with `DISTINCT ON` and `ORDER BY` with descending columns to return an error incorrectly. [#32175][#32175]
-- Fixed a bug that caused queries with `GROUP BY` or `DISTINCT ON` to return incorrect results or an "incorrectly ordered stream" error. Also improved performance of some aggregations by utilizing streaming aggregation in more cases. [#32175][#32175]
-- Fixed a panic caused by an incorrect assumption in the SQL optimizer code that `ROWS FROM` clauses contain only functions. [#32168][#32168]
-- Fixed an error returned by [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.1/view-node-details) after a new node is added to the cluster at a previous node's address. [#32198][#32198]
-- Fixed a mismatch between lookup join planning and execution, which could cause queries to fail with the error "X lookup columns specified, expecting at most Y". [#31896][#31896]
-- Fixed a bug that caused transactions to appear partially committed. CockroachDB was sometimes claiming to have failed to commit a transaction when some (or all) of its writes were actually persisted. [#32220][#32220]
-- Prevented long stalls that can occur in contended transactions. [#32217][#32217]
-- Non-superusers can no longer see other users' sessions and queries via the `ListSessions` and `ListLocalSessions` status server API methods. [#32284][#32284]
-- The Graphite metrics sender now collects and sends only the latest data point instead of all data points since startup. [#31888][#31888]
-
-
Performance improvements
-
-- Improved the performance of [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v2.1/as-of-system-time) queries by letting them use the table descriptor cache. [#31756][#31756]
-- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) can now determine more keys in certain cases involving unique indexes, potentially resulting in better plans. [#32044][#32044]
-- Within a transaction, when performing a schema change after the table descriptor has been modified, accessing the descriptor should be faster. [#31756][#31756]
-
-
Doc updates
-
-- Corrected the flow control logic of the transaction code sample in the [Build a Java App with CockroachDB](https://www.cockroachlabs.com/docs/v2.1/build-a-java-app-with-cockroachdb) tutorial. [#4047](https://github.com/cockroachdb/docs/pull/4047)
-- Expanded the [Running in a DaemonSet](https://www.cockroachlabs.com/docs/v2.1/kubernetes-performance#running-in-a-daemonset) instruction to cover both insecure and secure deployments. [#4037](https://github.com/cockroachdb/docs/pull/4037)
-- Made it easier to find and link to specific [installation methods](https://www.cockroachlabs.com/docs/v2.1/install-cockroachdb), and updated the Homebrew instructions to note potential conflicts in cases where CockroachDB was previously installed using a different method. [#4032](https://github.com/cockroachdb/docs/pull/4032), [#4036](https://github.com/cockroachdb/docs/pull/4036)
-- Updated the [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) documentation to cover [importing CockroachDB dump files](https://www.cockroachlabs.com/docs/v2.1/import#import-a-cockroachdb-dump-file). [#4029](https://github.com/cockroachdb/docs/pull/4029)
-
-
-
-
Contributors
-
-This release includes 27 merged PRs by 18 authors. We would like to thank the following contributors from the CockroachDB community:
-
-- Vijay Karthik
-- neeral
-
-
-
-- CockroachDB previously allowed non-authenticated access to privileged HTTP endpoints like `/_admin/v1/events`, which operate using `root` user permissions and can thus access (and sometimes modify) any and all data in the cluster. This security vulnerability has been patched by disallowing non-authenticated access to these endpoints and restricting access to admin users only.
-
- {{site.data.alerts.callout_info}}
- Users who have built monitoring automation using these HTTP endpoints must modify their automation to work using an HTTP session token for an admin user.
- {{site.data.alerts.end}}
-
-- Some Admin UI screens (e.g., Jobs) were previously incorrectly displayed using `root` user permissions, regardless of the logged-in user's credentials. This enabled insufficiently privileged users to access privileged information. This security vulnerability has been patched by using the credentials of the logged-in user to display all Admin UI screens.
-
-- Privileged HTTP endpoints and certain Admin UI screens require an admin user. However, `root` is disallowed from logging in via HTTP and it is not possible to create additional admin accounts without an Enterprise license. This is further discussed [here](https://github.com/cockroachdb/cockroach/issues/43870) and will be addressed in an upcoming patch revision.
-
- {{site.data.alerts.callout_info}}
- Users without an Enterprise license can create an additional admin user using a temporary evaluation license, until an alternative is available. A user created this way will persist beyond the license expiry.
- {{site.data.alerts.end}}
-
-- Some Admin UI screens currently display an error or a blank page when viewed by a non-admin user (e.g., Table Details). This is a known limitation mistakenly introduced by the changes described above. This situation is discussed further [here](https://github.com/cockroachdb/cockroach/issues/44033) and will be addressed in an upcoming patch revision. The list of UI pages affected includes but is not limited to:
-
- - Job details
- - Database details
- - Table details
- - Zone configurations
-
- {{site.data.alerts.callout_info}}
- Users can access these Admin UI screens using an admin user until a fix is available.
- {{site.data.alerts.end}}
-
-The list of HTTP endpoints affected by the first change above includes:
-
-| HTTP Endpoint | Description | Sensitive information revealed | Special (see below) |
-|--------------------------------------------------------|-----------------------------------|----------------------------------------------------|---------------------|
-| `/_admin/v1/data_distribution` | Database-table-node mapping | Database and table names | |
-| `/_admin/v1/databases/{database}/tables/{table}/stats` | Table stats histograms | Stored table data via PK values | |
-| `/_admin/v1/drain` | API to shut down a node | Can cause DoS on cluster | |
-| `/_admin/v1/enqueue_range` | Force range rebalancing | Can cause DoS on cluster | |
-| `/_admin/v1/events` | Event log | Usernames, stored object names, privilege mappings | |
-| `/_admin/v1/nontablestats` | Non-table statistics | Stored table data via PK values | |
-| `/_admin/v1/rangelog` | Range log | Stored table data via PK values | |
-| `/_admin/v1/settings` | Cluster settings | Organization name | |
-| `/_status/allocator/node/{node_id}` | Rebalance simulator | Can cause DoS on cluster | yes |
-| `/_status/allocator/range/{range_id}`                  | Rebalance simulator               | Can cause DoS on cluster                           | yes                 |
-| `/_status/certificates/{node_id}` | Node and user certificates | Credentials | |
-| `/_status/details/{node_id}` | Node details | Internal IP addresses | |
-| `/_status/enginestats/{node_id}` | Storage statistics | Operational details | |
-| `/_status/files/{node_id}` | Retrieve heap and goroutine dumps | Operational details | yes |
-| `/_status/gossip/{node_id}` | Gossip details | Internal IP addresses | yes |
-| `/_status/hotranges` | Ranges with active requests | Stored table data via PK values | |
-| `/_status/local_sessions` | SQL sessions | Cleartext SQL queries | yes |
-| `/_status/logfiles/{node_id}` | List of log files | Operational details | yes |
-| `/_status/logfiles/{node_id}/{file}` | Server logs + entries | Many: names, application data, credentials, etc. | yes |
-| `/_status/logs/{node_id}` | Log entries | Many: names, application data, credentials, etc. | yes |
-| `/_status/profile/{node_id}` | Profiling data | Operational details | |
-| `/_status/raft` | Raft details | Stored table data via PK values | |
-| `/_status/range/{range_id}` | Range details | Stored table data via PK values | |
-| `/_status/ranges/{node_id}` | Range details | Stored table data via PK values | |
-| `/_status/sessions` | SQL sessions | Cleartext SQL queries | yes |
-| `/_status/span` | Statistics per key span | Whether certain table rows exist | |
-| `/_status/stacks/{node_id}` | Stack traces | Application data, stored table data | |
-| `/_status/stores/{node_id}` | Store details | Operational details | |
-
-{{site.data.alerts.callout_info}}
-"Special" endpoints are subject to the [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) `server.remote_debugging.mode`. Unless the setting was customized, clients are only able to connect from the same machine as the node.
-{{site.data.alerts.end}}
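-
-For example, to verify that these "special" endpoints remain restricted to same-machine access (`local` is the default value of the setting):
-
-~~~ sql
-> SHOW CLUSTER SETTING server.remote_debugging.mode;
-~~~
-
-~~~ sql
-> SET CLUSTER SETTING server.remote_debugging.mode = 'local';
-~~~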
-
-
Admin UI changes
-
-- Certain web UI pages (like the list of databases or tables) now restrict their content to match the privileges of the logged-in user. [#42910][#42910]
-- The event log now presents all cluster settings changes, unredacted, when an admin user uses the page. [#42910][#42910]
-- UI customizations are now only saved if the user has the write privilege on `system.ui` (i.e., is an admin user). Also, all authenticated users share the same customizations. This is a known limitation and should be lifted in a future version. [#42910][#42910]
-- Access to table statistics is temporarily blocked for non-admin users until further notice, for security reasons. [#42910][#42910]
-- Certain debug pages have been blocked from non-admin users for security reasons. [#42910][#42910]
-
-
Bug fixes
-
-- Fixed a rare data corruption bug in RocksDB caused by newer Linux kernel's handling of i_generation on certain file systems. [#41394][#41394]
-- Fixed a bug causing the `cluster_logical_timestamp()` function to sometimes return incorrect results. [#41442][#41442]
-- Fixed a bug causing rapid network disconnections to lead to cluster unavailability because goroutines waited for a connection which would never be initialized to send its first heartbeat. [#42166][#42166]
-- Fixed a case where CockroachDB incorrectly determined that a query (or part of a query) containing an `IS NULL` constraint on a unique index column returns at most one row, possibly ignoring a `LIMIT 1` clause. [#42793][#42793]
-- [`ALTER INDEX IF EXISTS`](https://www.cockroachlabs.com/docs/v2.1/alter-index) no longer fails when using an unqualified index name that does not match any existing index. Now it is a no-op; see the example after this list. [#42841][#42841]
-- The `CommandQueue` no longer holds on to buffers if they become too large. This prevents unbounded growth of memory that may never be reclaimed. [#42961][#42961]
-- The `CommandQueue` now clears references to objects in its buffers to allow those objects to be reclaimed by the garbage collector. [#42961][#42961]
-- Fixed a bug causing disk stalls to allow a node to continue heartbeating its liveness record and prevent other nodes from taking over its leases, despite being completely unresponsive. [#41734][#41734]
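-
-As an example of the `ALTER INDEX IF EXISTS` no-op behavior above (hypothetical index names):
-
-~~~ sql
-> ALTER INDEX IF EXISTS nonexistent_idx RENAME TO renamed_idx;
-~~~
-
-When no index matches the unqualified name, the statement now succeeds without effect instead of returning an error.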
-
-
-
-- CockroachDB v2.1.0 included [security updates]({% link releases/v2.1.md %}#v2-1-10-security-updates) that inadvertently caused some Admin UI pages requiring table details to not display. These pages display properly once again. [#44194][#44194]
-
-
Bug fixes
-
-- Fixed panics caused by certain window functions that operate on tuples. [#43118][#43118]
-- Prevented rare cases of infinite looping on database files written with a CockroachDB version earlier than v2.1.9. [#43255][#43255]
-- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v2.1/explain) can now be used with statements that use [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v2.1/as-of-system-time). [#43306][#43306] {% comment %}doc{% endcomment %}
-- Fixed a panic when a log truncation took place concurrently with a replica being added to a Raft group. [#43314][#43314]
-- Migrating the privileges on the `system.lease` table no longer creates a deadlock during a cluster upgrade. [#43633][#43633]
-
-
-
-- The `CHANGEFEED` [`experimental-avro` option](https://www.cockroachlabs.com/docs/v2.1/create-changefeed#options) has been renamed `experimental_avro`. [#32235][#32235]
-
-
SQL language changes
-
-- The [`IMPORT format (file)`](https://www.cockroachlabs.com/docs/v2.1/import) syntax is deprecated in favor of `IMPORT format file`. Similarly, `IMPORT TABLE ... FROM format (file)` is deprecated in favor of `IMPORT TABLE ... FROM format file`; see the example below. [#31301][#31301] {% comment %}doc{% endcomment %}
-- CockroachDB now accepts ordinary string values for placeholders of type `BPCHAR`, for compatibility with PostgreSQL clients that use them. [#32661][#32661]
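-
-For instance, with a hypothetical `nodelocal` dump file, the only change is dropping the parentheses around the file:
-
-~~~ sql
-> -- Deprecated:
-> IMPORT PGDUMP ('nodelocal:///db.sql');
-> -- Preferred:
-> IMPORT PGDUMP 'nodelocal:///db.sql';
-~~~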
-
-
Command-line changes
-
-- The [`cockroach workload`](https://www.cockroachlabs.com/docs/v2.1/cockroach-workload) command now includes the `kv` load generator. [#32756][#32756] {% comment %}doc{% endcomment %}
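-
-For example, a quick sketch of running the new generator against a local cluster (flags other than those shown may vary by version):
-
-~~~ shell
-$ cockroach workload init kv
-$ cockroach workload run kv --duration=1m
-~~~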
-
-
Bug fixes
-
-- Fixed a panic on [`UPDATE ... RETURNING *`](https://www.cockroachlabs.com/docs/v2.1/update) during a schema change. [#32591][#32591]
-- Fixed a panic on [`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert) in the middle of a schema change adding a non-nullable column. [#32730][#32730]
-- Fixed a bug that prevents adding [computed columns](https://www.cockroachlabs.com/docs/v2.1/computed-columns) with the [`NOT NULL`](https://www.cockroachlabs.com/docs/v2.1/not-null) constraint. [#32730][#32730]
-- Fixed a deadlock when using [`ALTER TABLE ... VALIDATE CONSTRAINT`](https://www.cockroachlabs.com/docs/v2.1/validate-constraint) in a transaction with a schema change. [#32850][#32850]
-- Prevented a performance degradation related to overly aggressive Raft log truncations that could occur during [`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore) or [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import) operations.
-- Prevented a stall in the processing of Raft snapshots when many snapshots are requested at the same time. [#32414][#32414]
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/create-changefeed) now escape Kafka topic names, when necessary. [#32235][#32235] {% comment %}doc{% endcomment %}
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/create-changefeed) now spend dramatically less time flushing Kafka writes. [#32235][#32235]
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/create-changefeed) with the `experimental_avro` option now work with column `WIDTH`s and `PRECISION`s. [#32484][#32484] {% comment %}doc{% endcomment %}
-- Fixed a bug where Raft proposals could get stuck if forwarded to a leader who could not itself append a new entry to its log. [#32600][#32600]
-- Fixed a bug where calling [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v2.1/create-statistics) on a large table could cause the server to crash due to running out of memory. [#32635][#32635]
-- Fixed a bug that could cause data loss when a disk becomes temporarily full. [#32633][#32633]
-- CockroachDB now reports an unimplemented error when a `WHERE` clause is used after [`INSERT ... ON CONFLICT`](https://www.cockroachlabs.com/docs/v2.1/insert). [#32558][#32558] {% comment %}doc{% endcomment %}
-- CockroachDB now properly handles [foreign key cascading actions](https://www.cockroachlabs.com/docs/v2.1/foreign-key#foreign-key-actions) `SET DEFAULT` and `SET NULL` in [`SHOW CREATE`](https://www.cockroachlabs.com/docs/v2.1/show-create) and [`cockroach dump`](https://www.cockroachlabs.com/docs/v2.1/sql-dump). [#32630][#32630]
-- Fixed a crash that could occur during or after a data import on Windows. [#32666][#32666]
-- Lookup joins now properly preserve ordering for outer joins. Previously, under specific conditions, `LEFT JOIN` queries could produce results that did not respect the `ORDER BY` clause. [#32678][#32678]
-- CockroachDB once again enables `admin` users, including `root`, to list all user sessions besides their own. [#32709][#32709]
-- CockroachDB now properly rejects queries that use an invalid function (e.g., an aggregation) in the `SET` clause of an [`UPDATE`](https://www.cockroachlabs.com/docs/v2.1/update) statement. [#32506][#32506]
-- Dates no longer have a time component in their text encoding over the wire. [#32661][#32661]
-- Corrected binary decimal encoding for `NaN`. [#32661][#32661]
-- Prevented a panic when encountering an internal error related to invalid entries in the output of [`SHOW SESSIONS`](https://www.cockroachlabs.com/docs/v2.1/show-sessions). [#32742][#32742]
-- Prevented a panic when running certain subqueries that get planned in a distributed fashion. [#32670][#32670]
-- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v2.1/create-changefeed) emitting into Kafka now notice new partitions more quickly. [#32757][#32757]
-- CockroachDB now properly records statistics for sessions where the value of `application_name` is given by the client during initialization instead of `SET`. [#32755][#32755]
-- CockroachDB now properly evaluates [`CHECK`](https://www.cockroachlabs.com/docs/v2.1/check) constraints after a row conflict in [`INSERT ... ON CONFLICT`](https://www.cockroachlabs.com/docs/v2.1/insert) when the `CHECK` constraint depends on a column not assigned by `DO UPDATE SET`. [#32780][#32780]
-- The [`cockroach workload run`](https://www.cockroachlabs.com/docs/v2.1/cockroach-workload) subcommand no longer applies to data-only generators. [#32827][#32827] {% comment %}doc{% endcomment %}
-- Fixed a bug where metadata about contended keys was inadvertently ignored, in rare cases allowing for a failure in transaction cycle detection and transaction deadlocks. [#32853][#32853]
-
-
Performance improvements
-
-- Changed the default value for the `kv.transaction.write_pipelining_max_batch_size` [cluster setting](https://www.cockroachlabs.com/docs/v2.1/cluster-settings) to `128`. This speeds up bulk write operations. [#32621][#32621] {% comment %}doc{% endcomment %}
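-
-To inspect the new default (standard cluster-setting syntax):
-
-~~~ sql
-> SHOW CLUSTER SETTING kv.transaction.write_pipelining_max_batch_size;
-~~~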
-
-
Doc updates
-
-- Documented the [`cockroach workload`](https://www.cockroachlabs.com/docs/v2.1/cockroach-workload) command, which provides built-in load generators for simulating different types of client workloads, and updated various tutorials to use these workloads. [#4087](https://github.com/cockroachdb/docs/pull/4087)
-- Expanded the [`cockroach demo`](https://www.cockroachlabs.com/docs/v2.1/cockroach-demo) documentation to explain the use of built-in datasets. [#4087](https://github.com/cockroachdb/docs/pull/4087)
-- Added a secure version of the [Performance Tuning](https://www.cockroachlabs.com/docs/v2.1/performance-tuning) tutorial. [#4123](https://github.com/cockroachdb/docs/pull/4123)
-- Clarified that primary key columns cannot be [stored with a secondary index](https://www.cockroachlabs.com/docs/v2.1/create-index). [#4098](https://github.com/cockroachdb/docs/pull/4098)
-- Clarified when to use [`DELETE`](https://www.cockroachlabs.com/docs/v2.1/delete) vs. [`TRUNCATE`](https://www.cockroachlabs.com/docs/v2.1/truncate). [#4094](https://github.com/cockroachdb/docs/pull/4094)
-- Added important considerations when setting up [clock synchronization](https://www.cockroachlabs.com/docs/v2.1/recommended-production-settings#clock-synchronization).
-- Clarified the documentation on [automatic transaction retries](https://www.cockroachlabs.com/docs/v2.1/transactions#automatic-retries). [#4044](https://github.com/cockroachdb/docs/pull/4044)
-
-
-
-- Resolved a cluster degradation scenario that could occur during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore) operations, which manifested through a high number of pending Raft snapshots. [#33015][#33015]
-- Fixed a bug that could cause under-replication or unavailability in 5-node clusters and those using high replication factors. [#33047][#33047]
-- Fixed an infinite loop in a low-level scanning routine that could be hit in unusual circumstances. [#33065][#33065]
-
-
Build changes
-
-- `ncurses` is now linked statically so that the `cockroach` binary no longer requires a particular version of the `ncurses` shared library to be available on deployment machines. [#32960][#32960] {% comment %}doc{% endcomment %}
-
-
-
-- It is now possible to use AWS S3 temporary credentials for [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore) and [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import)/[`EXPORT`](https://www.cockroachlabs.com/docs/v2.1/export) using the `AWS_SESSION_TOKEN` parameter in the URL. [#33046][#33046] {% comment %}doc{% endcomment %}
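-
-For example, with placeholder credentials and a hypothetical bucket path:
-
-~~~ sql
-> BACKUP DATABASE bank TO 's3://acme-co-backup/bank?AWS_ACCESS_KEY_ID=<key>&AWS_SECRET_ACCESS_KEY=<secret>&AWS_SESSION_TOKEN=<token>';
-~~~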
-
-
SQL language changes
-
-- Added support for the `pg_catalog` introspection table `pg_am` for both PostgreSQL versions 9.5 and 9.6, which changed the table significantly. [#33276][#33276]
-- Previously, CockroachDB did not consider the value of the right operand for the `<<` and `>>` operators, resulting in potentially very large results and excessive RAM consumption. The range of these values is now restricted to that supported by the left operand; see the example below. [#33247][#33247]
-- Attempts to use some PostgreSQL built-in functions that are not yet supported in CockroachDB now produce a clearer error message and, if [diagnostics reporting](https://www.cockroachlabs.com/docs/v2.1/diagnostics-reporting) is enabled, are also reported as telemetry to gauge demand. [#33427][#33427] {% comment %}doc{% endcomment %}
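-
-As an illustration of the shift-operand restriction above (a sketch; the exact error message is not guaranteed):
-
-~~~ sql
-> SELECT 1::INT << 9999;
-~~~
-
-Such out-of-range shifts are now rejected rather than materializing an enormous value.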
-
-
Bug fixes
-
-- Fixed a bug where schema changes could get stuck for 5 minutes when executed immediately after a server restart. [#33062][#33062]
-- Fixed a bug with returning dropped unique columns in [`DELETE`](https://www.cockroachlabs.com/docs/v2.1/delete) statements with `RETURNING`. [#33541][#33541]
-- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v2.1/create-changefeed)s and incremental [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup)s no longer indefinitely hang under an infrequent condition. [#33141][#33141]
-- Fixed a mix-up in the output of [`cockroach node status --ranges`](https://www.cockroachlabs.com/docs/v2.1/view-node-details), which previously listed the count of under-replicated ranges in the `ranges_unavailable` column and the number of unavailable ranges in the `ranges_underreplicated` column. [#32951][#32951]
-- Fixed a possible goroutine leak when canceling queries. [#33137][#33137]
-- Cancel requests (via the pgwire protocol) now close quickly with an EOF instead of hanging, but still do not cancel the ongoing query. [#33246][#33246]
-- Fixed pgwire binary decoding of decimal `NaN` and `NULL` values in arrays. [#33306][#33306]
-- The [`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert) and [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v2.1/insert) statements now properly check that the user has the `SELECT` privilege on the target table. [#33359][#33359]
-- CockroachDB no longer crashes when [`SHOW SESSIONS`](https://www.cockroachlabs.com/docs/v2.1/show-sessions), [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v2.1/show-queries), or inspections of some `crdb_internal` tables are run while certain SQL sessions are issuing internal SQL queries. [#33261][#33261]
-- CockroachDB no longer reports under-replicated ranges corresponding to replicas that are waiting to be deleted. [#33407][#33407]
-- Fixed a panic that could result from not supplying a nullable column in an [`INSERT ON CONFLICT ... DO UPDATE`](https://www.cockroachlabs.com/docs/v2.1/insert) statement. [#33309][#33309]
-- Resolved a cluster degradation scenario that could occur during [`IMPORT`](https://www.cockroachlabs.com/docs/v2.1/import)/[`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore) operations, which manifested through a high number of pending Raft snapshots. [#33587][#33587]
-- Fixed a panic caused by some queries involving lookup joins where an input ordering must be preserved. [#33522][#33522]
-- Prevented a panic with certain queries that use the statement source (square bracket) syntax. [#33723][#33723]
-- [Window functions](https://www.cockroachlabs.com/docs/v2.1/window-functions) with non-empty `PARTITION BY` and `ORDER BY` clauses are now handled correctly when invoked via an external driver. [#33671][#33671]
-
-
Performance improvements
-
-- Improved the execution plans of some queries using `LIKE`. [#33072][#33072]
-
-
Doc updates
-
-- The new [Life of a Distributed Transaction](https://www.cockroachlabs.com/docs/v2.1/architecture/life-of-a-distributed-transaction) details the path that a query takes through CockroachDB's architecture, starting with a SQL client and progressing all the way to RocksDB (and then back out again). [#4281](https://github.com/cockroachdb/docs/pull/4281)
-- Updated the [Production Checklist](https://www.cockroachlabs.com/docs/v2.1/recommended-production-settings) with more current hardware recommendations and additional guidance on storage, file systems, and clock synchronization. [#4153](https://github.com/cockroachdb/docs/pull/4153)
-- Expanded the [SQLAlchemy tutorial](https://www.cockroachlabs.com/docs/v2.1/build-a-python-app-with-cockroachdb-sqlalchemy) to provide code for transaction retries and best practices for using SQLAlchemy with CockroachDB. [#4142](https://github.com/cockroachdb/docs/pull/4142)
-
-
Contributors
-
-This release includes 33 merged PRs by 17 authors. We would especially like to thank first-time contributor shakeelrao.
-
-[#32951]: https://github.com/cockroachdb/cockroach/pull/32951
-[#33046]: https://github.com/cockroachdb/cockroach/pull/33046
-[#33062]: https://github.com/cockroachdb/cockroach/pull/33062
-[#33072]: https://github.com/cockroachdb/cockroach/pull/33072
-[#33137]: https://github.com/cockroachdb/cockroach/pull/33137
-[#33141]: https://github.com/cockroachdb/cockroach/pull/33141
-[#33246]: https://github.com/cockroachdb/cockroach/pull/33246
-[#33247]: https://github.com/cockroachdb/cockroach/pull/33247
-[#33261]: https://github.com/cockroachdb/cockroach/pull/33261
-[#33276]: https://github.com/cockroachdb/cockroach/pull/33276
-[#33306]: https://github.com/cockroachdb/cockroach/pull/33306
-[#33309]: https://github.com/cockroachdb/cockroach/pull/33309
-[#33359]: https://github.com/cockroachdb/cockroach/pull/33359
-[#33407]: https://github.com/cockroachdb/cockroach/pull/33407
-[#33427]: https://github.com/cockroachdb/cockroach/pull/33427
-[#33522]: https://github.com/cockroachdb/cockroach/pull/33522
-[#33541]: https://github.com/cockroachdb/cockroach/pull/33541
-[#33587]: https://github.com/cockroachdb/cockroach/pull/33587
-[#33671]: https://github.com/cockroachdb/cockroach/pull/33671
-[#33723]: https://github.com/cockroachdb/cockroach/pull/33723
diff --git a/src/current/_includes/releases/v2.1/v2.1.5.md b/src/current/_includes/releases/v2.1/v2.1.5.md
deleted file mode 100644
index 89f15eea273..00000000000
--- a/src/current/_includes/releases/v2.1/v2.1.5.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
-- Added support for standard HTTP proxy environment variables in HTTP and S3 storage. [#34535][#34535] {% comment %}doc{% endcomment %}
-
-
SQL language changes
-
-- It is now possible to force a reverse scan of a specific index using `table@{FORCE_INDEX=index,DESC}`; see the example below. [#34121][#34121] {% comment %}doc{% endcomment %}
-- The value of `information_schema.columns.character_maximum_length` is set to `NULL` for all integer types, for compatibility with PostgreSQL. [#34201][#34201] {% comment %}doc{% endcomment %}
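-
-For example, to force a reverse scan over a hypothetical index:
-
-~~~ sql
-> SELECT * FROM accounts@{FORCE_INDEX=accounts_balance_idx,DESC};
-~~~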
-
-
Command-line changes
-
-- Fixed a bug in [`cockroach node status`](https://www.cockroachlabs.com/docs/v2.1/view-node-details) that prevented it from displaying down nodes in the cluster in some circumstances. [#34503][#34503]
-
-
Bug fixes
-
-- Lookup joins now properly preserve their input order even if more than one row of the input corresponds to the same row of the lookup table. [#33730][#33730]
-- Fixed a goroutine leak that would occur while a cluster was unavailable (or a subset of nodes was partitioned away from the cluster) and would cause a spike in resource usage once the outage resolved. [#34144][#34144]
-- Fixed panics or incorrect results in some cases when grouping on constant columns (either with `GROUP BY` or `DISTINCT ON`). [#34168][#34168]
-- The values reported in `information_schema.columns` for integer columns created as `BIT` prior to CockroachDB v2.1 are now consistent with other integer types. [#34201][#34201]
-- Fixed a bug that would delay Raft log truncations. [#34284][#34284]
-- Prevented down-replicating widely replicated ranges when nodes in the cluster are temporarily down. [#34199][#34199]
-- CockroachDB now allows restarting a node at an address previously allocated for another node. [#34198][#34198]
-- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v2.1/create-changefeed)s now can be started on tables that have been backfilled by schema changes. [#34362][#34362]
-- Fixed a backup in flow creation, observed as "no inbound stream connection" errors, caused by not releasing a lock before attempting a possibly blocking operation. [#34364][#34364]
-- Fixed a panic when updating a job that doesn't exist. [#34672][#34672]
-- Fixed a bug in [`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore) that prevented restoring some [`BACKUP`](https://www.cockroachlabs.com/docs/v2.1/backup)s containing previously dropped or truncated interleaved tables. [#34719][#34719]
-- The value of the `attnum` column in `pg_catalog.pg_attribute` now remains stable across column drops. [#34734][#34734]
-- Prevented a problem that would cause the Raft log to grow very large, which in turn could prevent replication changes. [#34774][#34774]
-- Prevented down nodes from obstructing Raft log truncation on ranges they are a member of. This problem could cause replication to fail due to an overly large Raft log. [#34774][#34774]
-- Fixed a bug that would incorrectly cause JSON field access equality comparisons to be true when they should be false. [#32214][#32214]
-
-
Performance improvements
-
-- Index joins, lookup joins, foreign key checks, cascade scans, zigzag joins, and `UPSERT`s no longer needlessly scan over child interleaved tables when searching for keys. [#33652][#33652]
-
-
Doc updates
-
-- Updated the [SQL Performance Best Practices](https://www.cockroachlabs.com/docs/v2.1/performance-best-practices-overview#interleave-tables) with caveats around interleaving tables. [#4273](https://github.com/cockroachdb/docs/pull/4273)
-- Added a note that when a table that was previously [split](https://www.cockroachlabs.com/docs/v2.1/split-at) is truncated, the table must be pre-split again. [#4274](https://github.com/cockroachdb/docs/pull/4274)
-- Added guidance on [removing `UNIQUE` constraints](https://www.cockroachlabs.com/docs/v2.1/constraints#remove-constraints). [#4276](https://github.com/cockroachdb/docs/pull/4276)
-- Added a [warning about cross-store rebalancing](https://www.cockroachlabs.com/docs/v2.1/start-a-node#store) not working as expected in 3-node clusters with multiple stores per node. [#4320](https://github.com/cockroachdb/docs/pull/4320)
-
-
-
-- Fixed a panic when the subquery in `UPDATE SET (a,b) = (...subquery...)` returns no rows. [#34805][#34805]
-- CockroachDB now only lists tables in `pg_catalog.pg_tables`, for compatibility with PostgreSQL. [#34858][#34858]
-- Fixed a panic during some `UNION ALL` operations with projections, filters, or renders directly on top of the `UNION ALL`. [#34913][#34913]
-- Fixed a planning bug that caused incorrect aggregation results on multi-node aggregations with implicit, partial orderings on the inputs to the aggregations. [#35259][#35259]
-
-
Doc updates
-
-- Added much more guidance on [troubleshooting cluster setup](https://www.cockroachlabs.com/docs/v2.1/cluster-setup-troubleshooting) and [troubleshooting SQL behavior](https://www.cockroachlabs.com/docs/v2.1/query-behavior-troubleshooting). [#4223](https://github.com/cockroachdb/docs/pull/4223)
-
-
-
-- Fixed a bug in [`RESTORE`](https://www.cockroachlabs.com/docs/v2.1/restore) where some unusual range boundaries in [interleaved tables](https://www.cockroachlabs.com/docs/v2.1/interleave-in-parent) caused an error. [#36006][#36006]
-- CockroachDB now properly applies column width and nullability constraints on the result of conflict resolution in [`UPSERT`](https://www.cockroachlabs.com/docs/v2.1/upsert) and [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v2.1/insert). [#35373][#35373]
-- Subtracting `0` from a [`JSONB`](https://www.cockroachlabs.com/docs/v2.1/jsonb) array now correctly removes its first element, as shown after this list. [#35619][#35619]
-- Fixed an on-disk inconsistency that could result from a crash during a range merge. [#35752][#35752]
-- While a cluster is unavailable (e.g., during a network partition), memory and goroutines used for authenticating connections no longer leak when the client closes said connections. [#36231][#36231]
-- Single column family [`JSONB`](https://www.cockroachlabs.com/docs/v2.1/jsonb) columns are now decoded correctly. [#36628][#36628]
-- Fixed a rare inconsistency that could occur on overloaded clusters. [#36960][#36960]
-- Fixed a possible panic while recovering from a WAL on which a sync operation failed. [#37214][#37214]
-- Reduced the risk of data unavailability during AZ/region failure. [#37336][#37336]
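-
-To illustrate the `JSONB` fix above (the `-` operator with an integer removes the array element at that index):
-
-~~~ sql
-> SELECT '["a", "b", "c"]'::JSONB - 0;
-~~~
-
-This now returns `["b", "c"]`.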
-
-
Build changes
-
-- CockroachDB will provisionally refuse to build with Go 1.12, as this version is known to produce incorrect code inside CockroachDB. [#35639][#35639]
-- Release Docker images are now built on Debian 9.8. [#35737][#35737]
-
-
-
-- Fixed crashes when trying to run certain `SHOW` commands via the pgwire prepare path. [#37891][#37891]
-- Fixed a rare crash ("close of closed channel") that would occur when shutting down a server. [#37893][#37893]
-- Fixed a bug in result set size estimation in the [Optimizer](https://www.cockroachlabs.com/docs/v2.1/cost-based-optimizer) that could cause poor plans to be generated for queries involving large `INT` ranges. [#38039][#38039]
-- `NULL`s are now correctly handled by `MIN`, `SUM`, and `AVG` when used as [window functions](https://www.cockroachlabs.com/docs/v2.1/window-functions). [#38357][#38357]
-- Prevented a possible missing row from queries that involved iterator reuse and seeking into the gap between sstables bridged by a range tombstone. [#37694][#37694]
-
-
Security improvements
-
-- The `CommonName` is now only checked on the first certificate in a file. [#38166][#38166]
-- Stack memory used by CockroachDB is now marked as non-executable, improving security and compatibility with SELinux. [#38134][#38134]
-
-
-
-- Fixed a bug that could lead to data inconsistencies and crashes with the message `consistency check failed with N inconsistent replicas`. [#40353][#40353]
-- Fixed incorrect results, or "unordered span" errors, in some cases involving exclusive inequalities with non-numeric types. [#38897][#38897]
-- Fixed a potential infinite loop in queries involving reverse scans. [#39105][#39105]
-- Unary negatives in constant arithmetic expressions are no longer ignored. [#39368][#39368]
-- Fixed a wrong comparator used in the RocksDB compaction picker, which could lead to an infinite compaction loop. [#40752][#40752]
-- Fixed a bug where an MVCC value at a future timestamp could be returned after a transaction restart. [#40632][#40632]
-- Intents in a read's uncertainty interval are now considered uncertain, just as if they were committed values. This removes the potential for stale reads when a causally dependent transaction runs into the not-yet-resolved intents of a causal ancestor. [#40632][#40632]
-
-
-| Field | Description |
-|-------|-------------|
-| Metric Name | How the system refers to this metric, e.g., `sql.bytesin`. |
-| Downsampler | Combines the individual datapoints over a longer period into a single datapoint. One data point is stored every ten seconds, but for queries over long time spans, the backend lowers the resolution of the returned data, perhaps returning only one data point per minute, per five minutes, or even per hour in the case of the 30-day view. Options: `AVG` (the average value over the time period), `MIN` (the lowest value seen), `MAX` (the highest value seen), `SUM` (the sum of all values seen). |
-| Aggregator | Combines data points from different nodes. It has the same options available as the Downsampler: `AVG`, `MIN`, `MAX`, `SUM`. |
-| Rate | Determines how to display the rate of change during the selected time period. Options: `Normal` (returns the actual recorded value), `Rate` (returns the rate of change of the value per second), `Non-negative Rate` (returns the rate of change, but returns 0 instead of negative values). Many tracked stats are monotonically increasing counters, so each sample is just the total value of that counter; the rate of change of the counter represents the rate of events being counted, which is usually what you want to graph. `Non-negative Rate` is needed because the counters are stored in memory, so if a node restarts they reset to zero, whereas normally they only increase. |
-| Source | The set of nodes being queried: either the entire cluster or a single, named node. |
-| Per Node | If checked, the chart shows a line for each node's value of this metric. |
-
diff --git a/src/current/_includes/v2.1/app/BasicSample.java b/src/current/_includes/v2.1/app/BasicSample.java
deleted file mode 100644
index 244694e8859..00000000000
--- a/src/current/_includes/v2.1/app/BasicSample.java
+++ /dev/null
@@ -1,55 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac BasicSample.java && java BasicSample
-*/
-
-public class BasicSample {
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the "bank" database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "require");
- props.setProperty("sslrootcert", "certs/ca.crt");
- props.setProperty("sslkey", "certs/client.maxroach.pk8");
- props.setProperty("sslcert", "certs/client.maxroach.crt");
- props.setProperty("ApplicationName", "roachtest");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
- try {
- // Create the "accounts" table.
- db.createStatement()
- .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- db.createStatement()
- .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- System.out.println("Initial balances:");
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n",
- res.getInt("id"),
- res.getInt("balance"));
- }
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/TxnSample.java b/src/current/_includes/v2.1/app/TxnSample.java
deleted file mode 100644
index 8873b2e0385..00000000000
--- a/src/current/_includes/v2.1/app/TxnSample.java
+++ /dev/null
@@ -1,148 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac TxnSample.java && java TxnSample
-*/
-
-// Ambiguous whether the transaction committed or not.
-class AmbiguousCommitException extends SQLException{
- public AmbiguousCommitException(Throwable cause) {
- super(cause);
- }
-}
-
-class InsufficientBalanceException extends Exception {}
-
-class AccountNotFoundException extends Exception {
- public int account;
- public AccountNotFoundException(int account) {
- this.account = account;
- }
-}
-
-// A simple interface that provides a retryable lambda expression.
-interface RetryableTransaction {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException;
-}
-
-public class TxnSample {
- public static RetryableTransaction transferFunds(int from, int to, int amount) {
- return new RetryableTransaction() {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- // Check the current balance.
- ResultSet res = conn.createStatement()
- .executeQuery("SELECT balance FROM accounts WHERE id = "
- + from);
- if(!res.next()) {
- throw new AccountNotFoundException(from);
- }
-
- int balance = res.getInt("balance");
- if(balance < amount) {
- throw new InsufficientBalanceException();
- }
-
- // Perform the transfer.
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance - "
- + amount + " where id = " + from);
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance + "
- + amount + " where id = " + to);
- }
- };
- }
-
- public static void retryTransaction(Connection conn, RetryableTransaction tx)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- Savepoint sp = conn.setSavepoint("cockroach_restart");
- while(true) {
- boolean releaseAttempted = false;
- try {
- tx.run(conn);
- releaseAttempted = true;
- conn.releaseSavepoint(sp);
- break;
- }
- catch(SQLException e) {
- String sqlState = e.getSQLState();
-
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if(sqlState.equals("40001")) {
- // Signal the database that we will attempt a retry.
- conn.rollback(sp);
- } else if(releaseAttempted) {
- throw new AmbiguousCommitException(e);
- } else {
- throw e;
- }
- }
- }
- conn.commit();
- }
-
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the 'bank' database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "require");
- props.setProperty("sslrootcert", "certs/ca.crt");
- props.setProperty("sslkey", "certs/client.maxroach.pk8");
- props.setProperty("sslcert", "certs/client.maxroach.crt");
- props.setProperty("ApplicationName", "roachtest");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
-
- try {
- // We need to turn off autocommit mode to allow for
- // multi-statement transactions.
- db.setAutoCommit(false);
-
- // Perform the transfer. This assumes the 'accounts'
- // table has already been created in the database.
- RetryableTransaction transfer = transferFunds(1, 2, 100);
- retryTransaction(db, transfer);
-
- // Check balances after transfer.
- db.setAutoCommit(true);
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n", res.getInt("id"),
- res.getInt("balance"));
- }
-
- } catch(InsufficientBalanceException e) {
- System.out.println("Insufficient balance");
- } catch(AccountNotFoundException e) {
- System.out.println("No users in the table with id " + e.account);
- } catch(AmbiguousCommitException e) {
- System.out.println("Ambiguous result encountered: " + e);
- } catch(SQLException e) {
- System.out.println("SQLException encountered:" + e);
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/activerecord-basic-sample.rb b/src/current/_includes/v2.1/app/activerecord-basic-sample.rb
deleted file mode 100644
index f1d35e1de3a..00000000000
--- a/src/current/_includes/v2.1/app/activerecord-basic-sample.rb
+++ /dev/null
@@ -1,48 +0,0 @@
-require 'active_record'
-require 'activerecord-cockroachdb-adapter'
-require 'pg'
-
-# Connect to CockroachDB through ActiveRecord.
-# In Rails, this configuration would go in config/database.yml as usual.
-ActiveRecord::Base.establish_connection(
- adapter: 'cockroachdb',
- username: 'maxroach',
- database: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
- sslrootcert: 'certs/ca.crt',
- sslkey: 'certs/client.maxroach.key',
- sslcert: 'certs/client.maxroach.crt'
-)
-
-
-# Define the Account model.
-# In Rails, this would go in app/models/ as usual.
-class Account < ActiveRecord::Base
- validates :id, presence: true
- validates :balance, presence: true
-end
-
-# Define a migration for the accounts table.
-# In Rails, this would go in db/migrate/ as usual.
-class Schema < ActiveRecord::Migration[5.0]
- def change
- create_table :accounts, force: true do |t|
- t.integer :balance
- end
- end
-end
-
-# Run the schema migration by hand.
-# In Rails, this would be done via rake db:migrate as usual.
-Schema.new.change()
-
-# Create two accounts, inserting two rows into the accounts table.
-Account.create(id: 1, balance: 1000)
-Account.create(id: 2, balance: 250)
-
-# Retrieve accounts and print out the balances
-Account.all.each do |acct|
- puts "#{acct.id} #{acct.balance}"
-end
diff --git a/src/current/_includes/v2.1/app/basic-sample.c b/src/current/_includes/v2.1/app/basic-sample.c
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/src/current/_includes/v2.1/app/basic-sample.clj b/src/current/_includes/v2.1/app/basic-sample.clj
deleted file mode 100644
index b139d27b8e1..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.clj
+++ /dev/null
@@ -1,31 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:subprotocol "postgresql"
- :subname "//localhost:26257/bank"
- :user "maxroach"
- :password ""})
-
-(defn test-basic []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Insert two rows into the "accounts" table.
- (j/insert! conn :accounts {:id 1 :balance 1000})
- (j/insert! conn :accounts {:id 2 :balance 250})
-
- ;; Print out the balances.
- (println "Initial balances:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- doall)
-
- ;; The database connection is automatically closed by with-db-connection.
- ))
-
-
-(defn -main [& args]
- (test-basic))
diff --git a/src/current/_includes/v2.1/app/basic-sample.cpp b/src/current/_includes/v2.1/app/basic-sample.cpp
deleted file mode 100644
index 0cdb6f65bfd..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.cpp
+++ /dev/null
@@ -1,41 +0,0 @@
-// Build with g++ -std=c++11 basic-sample.cpp -lpq -lpqxx
-
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-int main() {
- try {
- // Connect to the "bank" database.
- pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
- pqxx::nontransaction w(c);
-
- // Create the "accounts" table.
- w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- cout << "Initial balances:" << endl;
- pqxx::result r = w.exec("SELECT id, balance FROM accounts");
- for (auto row : r) {
- cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl;
- }
-
- w.commit(); // Note this doesn't do anything
- // for a nontransaction, but is still required.
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v2.1/app/basic-sample.cs b/src/current/_includes/v2.1/app/basic-sample.cs
deleted file mode 100644
index d17f772e2cd..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.cs
+++ /dev/null
@@ -1,101 +0,0 @@
-using System;
-using System.Data;
-using System.Security.Cryptography.X509Certificates;
-using System.Net.Security;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Require;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- Simple(connStringBuilder.ConnectionString);
- }
-
- static void Simple(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback;
- conn.UserCertificateValidationCallback += UserCertificateValidationCallback;
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
-
- static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts)
- {
- // To be able to add a certificate with a private key included, we must convert it to
- // a PKCS #12 format. The following openssl command does this:
- // openssl pkcs12 -password pass: -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx
- // As of 2018-12-10, you need to provide a password for this to work on macOS.
- // See https://github.com/dotnet/corefx/issues/24225
-
- // Note that the password used during X509 cert creation below
- // must match the password used in the openssl command above.
- clientCerts.Add(new X509Certificate2("certs/client.maxroach.pfx", "pass"));
- }
-
- // By default, .Net does all of its certificate verification using the system certificate store.
- // This callback is necessary to validate the server certificate against a CA certificate file.
- static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors)
- {
- X509Certificate2 caCert = new X509Certificate2("certs/ca.crt");
- X509Chain caCertChain = new X509Chain();
- caCertChain.ChainPolicy = new X509ChainPolicy()
- {
- RevocationMode = X509RevocationMode.NoCheck,
- RevocationFlag = X509RevocationFlag.EntireChain
- };
- caCertChain.ChainPolicy.ExtraStore.Add(caCert);
-
- X509Certificate2 serverCert = new X509Certificate2(certificate);
-
- caCertChain.Build(serverCert);
- if (caCertChain.ChainStatus.Length == 0)
- {
- // No errors
- return true;
- }
-
- foreach (X509ChainStatus status in caCertChain.ChainStatus)
- {
- // Check if we got any errors other than UntrustedRoot (which we will always get if we do not install the CA cert to the system store)
- if (status.Status != X509ChainStatusFlags.UntrustedRoot)
- {
- return false;
- }
- }
- return true;
- }
-
- }
-}
diff --git a/src/current/_includes/v2.1/app/basic-sample.go b/src/current/_includes/v2.1/app/basic-sample.go
deleted file mode 100644
index 6e22c858dbb..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.go
+++ /dev/null
@@ -1,46 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "log"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- // Connect to the "bank" database.
- db, err := sql.Open("postgres",
- "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer db.Close()
-
- // Create the "accounts" table.
- if _, err := db.Exec(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := db.Exec(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := db.Query("SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v2.1/app/basic-sample.js b/src/current/_includes/v2.1/app/basic-sample.js
deleted file mode 100644
index 4e86cb2cbca..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.js
+++ /dev/null
@@ -1,63 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the "bank" database.
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257,
- ssl: {
- ca: fs.readFileSync('certs/ca.crt')
- .toString(),
- key: fs.readFileSync('certs/client.maxroach.key')
- .toString(),
- cert: fs.readFileSync('certs/client.maxroach.crt')
- .toString()
- }
-};
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
-
- // Close communication with the database and exit.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- finish();
- }
- async.waterfall([
- function (next) {
- // Create the 'accounts' table.
- client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
- },
- function (results, next) {
- // Insert two rows into the 'accounts' table.
- client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
- },
- function (results, next) {
- // Print out account balances.
- client.query('SELECT id, balance FROM accounts;', next);
- },
- ],
- function (err, results) {
- if (err) {
- console.error('Error inserting into and selecting from accounts: ', err);
- finish();
- }
-
- console.log('Initial balances:');
- results.rows.forEach(function (row) {
- console.log(row);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.1/app/basic-sample.php b/src/current/_includes/v2.1/app/basic-sample.php
deleted file mode 100644
index 4edae09b12a..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
-      PDO::ATTR_EMULATE_PREPARES => true,
-      PDO::ATTR_PERSISTENT => true
-  ));
-
- $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
- print "Account balances:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.1/app/basic-sample.py b/src/current/_includes/v2.1/app/basic-sample.py
deleted file mode 100644
index edf1b2617d0..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.py
+++ /dev/null
@@ -1,37 +0,0 @@
-# Import the driver.
-import psycopg2
-
-# Connect to the "bank" database.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='require',
- sslrootcert='certs/ca.crt',
- sslkey='certs/client.maxroach.key',
- sslcert='certs/client.maxroach.crt',
- port=26257,
- host='localhost'
-)
-
-# Make each statement commit immediately.
-conn.set_session(autocommit=True)
-
-# Open a cursor to perform database operations.
-cur = conn.cursor()
-
-# Create the "accounts" table.
-cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")
-
-# Insert two rows into the "accounts" table.
-cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
-
-# Print out the balances.
-cur.execute("SELECT id, balance FROM accounts")
-rows = cur.fetchall()
-print('Initial balances:')
-for row in rows:
- print([str(cell) for cell in row])
-
-# Close the database connection.
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v2.1/app/basic-sample.rb b/src/current/_includes/v2.1/app/basic-sample.rb
deleted file mode 100644
index 93f0dc3d20c..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.rb
+++ /dev/null
@@ -1,31 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
- sslrootcert: 'certs/ca.crt',
- sslkey:'certs/client.maxroach.key',
- sslcert:'certs/client.maxroach.crt'
-)
-
-# Create the "accounts" table.
-conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-
-# Insert two rows into the "accounts" table.
-conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-
-# Print out the balances.
-puts 'Initial balances:'
-conn.exec('SELECT id, balance FROM accounts') do |res|
- res.each do |row|
- puts row
- end
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.1/app/basic-sample.rs b/src/current/_includes/v2.1/app/basic-sample.rs
deleted file mode 100644
index 4a078991cd8..00000000000
--- a/src/current/_includes/v2.1/app/basic-sample.rs
+++ /dev/null
@@ -1,45 +0,0 @@
-use openssl::error::ErrorStack;
-use openssl::ssl::{SslConnector, SslFiletype, SslMethod};
-use postgres::Client;
-use postgres_openssl::MakeTlsConnector;
-
-fn ssl_config() -> Result<MakeTlsConnector, ErrorStack> {
- let mut builder = SslConnector::builder(SslMethod::tls())?;
- builder.set_ca_file("certs/ca.crt")?;
- builder.set_certificate_chain_file("certs/client.maxroach.crt")?;
- builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?;
- Ok(MakeTlsConnector::new(builder.build()))
-}
-
-fn main() {
- let connector = ssl_config().unwrap();
- let mut client =
- Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap();
-
- // Create the "accounts" table.
- client
- .execute(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)",
- &[],
- )
- .unwrap();
-
- // Insert two rows into the "accounts" table.
- client
- .execute(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)",
- &[],
- )
- .unwrap();
-
- // Print out the balances.
- println!("Initial balances:");
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v2.1/app/before-you-begin.md b/src/current/_includes/v2.1/app/before-you-begin.md
deleted file mode 100644
index dfb97226414..00000000000
--- a/src/current/_includes/v2.1/app/before-you-begin.md
+++ /dev/null
@@ -1,8 +0,0 @@
-1. [Install CockroachDB](install-cockroachdb.html).
-2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster.
-3. Choose the instructions that correspond to whether your cluster is secure or insecure:
-
-
-
-
-
diff --git a/src/current/_includes/v2.1/app/common-steps.md b/src/current/_includes/v2.1/app/common-steps.md
deleted file mode 100644
index b2d6e4deed2..00000000000
--- a/src/current/_includes/v2.1/app/common-steps.md
+++ /dev/null
@@ -1,36 +0,0 @@
-## Step 2. Start a single-node cluster
-
-For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=hello-1 \
---listen-addr=localhost
-~~~
-
-## Step 3. Create a user
-
-In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach user set maxroach --insecure
-~~~
-
-## Step 4. Create a database and grant privileges
-
-As the `root` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create a `bank` database.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'CREATE DATABASE bank'
-~~~
-
-Then [grant privileges](grant.html) to the `maxroach` user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'GRANT ALL ON DATABASE bank TO maxroach'
-~~~
diff --git a/src/current/_includes/v2.1/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v2.1/app/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index e887162f380..00000000000
--- a/src/current/_includes/v2.1/app/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v2.1/app/gorm-basic-sample.go b/src/current/_includes/v2.1/app/gorm-basic-sample.go
deleted file mode 100644
index d18948b80b2..00000000000
--- a/src/current/_includes/v2.1/app/gorm-basic-sample.go
+++ /dev/null
@@ -1,41 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
-
- // Import GORM-related packages.
- "github.com/jinzhu/gorm"
- _ "github.com/jinzhu/gorm/dialects/postgres"
-)
-
-// Account is our model, which corresponds to the "accounts" database table.
-type Account struct {
- ID int `gorm:"primary_key"`
- Balance int
-}
-
-func main() {
- // Connect to the "bank" database as the "maxroach" user.
- const addr = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt"
- db, err := gorm.Open("postgres", addr)
- if err != nil {
- log.Fatal(err)
- }
- defer db.Close()
-
- // Automatically create the "accounts" table based on the Account model.
- db.AutoMigrate(&Account{})
-
- // Insert two rows into the "accounts" table.
- db.Create(&Account{ID: 1, Balance: 1000})
- db.Create(&Account{ID: 2, Balance: 250})
-
- // Print out the balances.
- var accounts []Account
- db.Find(&accounts)
- fmt.Println("Initial balances:")
- for _, account := range accounts {
- fmt.Printf("%d %d\n", account.ID, account.Balance)
- }
-}
diff --git a/src/current/_includes/v2.1/app/hibernate-basic-sample/Sample.java b/src/current/_includes/v2.1/app/hibernate-basic-sample/Sample.java
deleted file mode 100644
index ed36ae15ad3..00000000000
--- a/src/current/_includes/v2.1/app/hibernate-basic-sample/Sample.java
+++ /dev/null
@@ -1,64 +0,0 @@
-package com.cockroachlabs;
-
-import org.hibernate.Session;
-import org.hibernate.SessionFactory;
-import org.hibernate.cfg.Configuration;
-
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-import javax.persistence.criteria.CriteriaQuery;
-
-public class Sample {
- // Create a SessionFactory based on our hibernate.cfg.xml configuration
- // file, which defines how to connect to the database.
- private static final SessionFactory sessionFactory =
- new Configuration()
- .configure("hibernate.cfg.xml")
- .addAnnotatedClass(Account.class)
- .buildSessionFactory();
-
- // Account is our model, which corresponds to the "accounts" database table.
- @Entity
- @Table(name="accounts")
- public static class Account {
- @Id
- @Column(name="id")
- public long id;
-
- @Column(name="balance")
- public long balance;
-
- // Convenience constructor.
- public Account(int id, int balance) {
- this.id = id;
- this.balance = balance;
- }
-
- // Hibernate needs a default (no-arg) constructor to create model objects.
- public Account() {}
- }
-
- public static void main(String[] args) throws Exception {
- Session session = sessionFactory.openSession();
-
- try {
- // Insert two rows into the "accounts" table.
- session.beginTransaction();
- session.save(new Account(1, 1000));
- session.save(new Account(2, 250));
- session.getTransaction().commit();
-
- // Print out the balances.
- CriteriaQuery query = session.getCriteriaBuilder().createQuery(Account.class);
- query.select(query.from(Account.class));
- for (Account account : session.createQuery(query).getResultList()) {
- System.out.printf("%d %d\n", account.id, account.balance);
- }
- } finally {
- session.close();
- sessionFactory.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/hibernate-basic-sample/build.gradle b/src/current/_includes/v2.1/app/hibernate-basic-sample/build.gradle
deleted file mode 100644
index 36f33d73fe6..00000000000
--- a/src/current/_includes/v2.1/app/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
- mavenCentral()
-}
-
-dependencies {
- compile 'org.hibernate:hibernate-core:5.2.4.Final'
- compile 'org.postgresql:postgresql:42.2.2.jre7'
-}
diff --git a/src/current/_includes/v2.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v2.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index c806749d612..00000000000
Binary files a/src/current/_includes/v2.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v2.1/app/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v2.1/app/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index dea7b7bd9d7..00000000000
--- a/src/current/_includes/v2.1/app/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,21 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
-        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
-        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
-    <session-factory>
-        <!-- Database connection settings. -->
-        <property name="connection.driver_class">org.postgresql.Driver</property>
-        <property name="dialect">org.hibernate.dialect.PostgreSQL95Dialect</property>
-        <!-- The secure JDBC connection URL was lost in extraction; elided here. -->
-        <property name="connection.url"></property>
-        <property name="connection.username">maxroach</property>
-
-        <!-- Drop and re-create the database schema on startup. -->
-        <property name="hbm2ddl.auto">create</property>
-
-        <!-- Echo executed SQL to stdout for debugging. -->
-        <property name="show_sql">true</property>
-        <property name="format_sql">true</property>
-    </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v2.1/app/insecure/BasicSample.java b/src/current/_includes/v2.1/app/insecure/BasicSample.java
deleted file mode 100644
index 001d38feb48..00000000000
--- a/src/current/_includes/v2.1/app/insecure/BasicSample.java
+++ /dev/null
@@ -1,51 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac BasicSample.java && java BasicSample
-*/
-
-public class BasicSample {
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the "bank" database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "disable");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
- try {
- // Create the "accounts" table.
- db.createStatement()
- .execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
- // Insert two rows into the "accounts" table.
- db.createStatement()
- .execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
- // Print out the balances.
- System.out.println("Initial balances:");
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n",
- res.getInt("id"),
- res.getInt("balance"));
- }
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/TxnSample.java b/src/current/_includes/v2.1/app/insecure/TxnSample.java
deleted file mode 100644
index 11021ec0e71..00000000000
--- a/src/current/_includes/v2.1/app/insecure/TxnSample.java
+++ /dev/null
@@ -1,145 +0,0 @@
-import java.sql.*;
-import java.util.Properties;
-
-/*
- Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
- Then, compile and run this example like so:
-
- $ export CLASSPATH=.:/path/to/postgresql.jar
- $ javac TxnSample.java && java TxnSample
-*/
-
-// Ambiguous whether the transaction committed or not.
-class AmbiguousCommitException extends SQLException {
- public AmbiguousCommitException(Throwable cause) {
- super(cause);
- }
-}
-
-class InsufficientBalanceException extends Exception {}
-
-class AccountNotFoundException extends Exception {
- public int account;
- public AccountNotFoundException(int account) {
- this.account = account;
- }
-}
-
-// A simple interface that provides a retryable lambda expression.
-interface RetryableTransaction {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException;
-}
-
-public class TxnSample {
- public static RetryableTransaction transferFunds(int from, int to, int amount) {
- return new RetryableTransaction() {
- public void run(Connection conn)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- // Check the current balance.
- ResultSet res = conn.createStatement()
- .executeQuery("SELECT balance FROM accounts WHERE id = "
- + from);
- if(!res.next()) {
- throw new AccountNotFoundException(from);
- }
-
- int balance = res.getInt("balance");
-                if(balance < amount) {
- throw new InsufficientBalanceException();
- }
-
- // Perform the transfer.
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance - "
- + amount + " where id = " + from);
- conn.createStatement()
- .executeUpdate("UPDATE accounts SET balance = balance + "
- + amount + " where id = " + to);
- }
- };
- }
-
- public static void retryTransaction(Connection conn, RetryableTransaction tx)
- throws SQLException, InsufficientBalanceException,
- AccountNotFoundException, AmbiguousCommitException {
-
- Savepoint sp = conn.setSavepoint("cockroach_restart");
- while(true) {
- boolean releaseAttempted = false;
- try {
- tx.run(conn);
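-                // If a failure happens after this point, the RELEASE (and thus
-                // the commit) may or may not have taken effect, so it is
-                // reported below as AmbiguousCommitException instead of retried.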
- releaseAttempted = true;
- conn.releaseSavepoint(sp);
- }
- catch(SQLException e) {
- String sqlState = e.getSQLState();
-
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if(sqlState.equals("40001")) {
- // Signal the database that we will attempt a retry.
- conn.rollback(sp);
- continue;
- } else if(releaseAttempted) {
- throw new AmbiguousCommitException(e);
- } else {
- throw e;
- }
- }
- break;
- }
- conn.commit();
- }
-
- public static void main(String[] args)
- throws ClassNotFoundException, SQLException {
-
- // Load the Postgres JDBC driver.
- Class.forName("org.postgresql.Driver");
-
- // Connect to the 'bank' database.
- Properties props = new Properties();
- props.setProperty("user", "maxroach");
- props.setProperty("sslmode", "disable");
-
- Connection db = DriverManager
- .getConnection("jdbc:postgresql://127.0.0.1:26257/bank", props);
-
-
- try {
- // We need to turn off autocommit mode to allow for
- // multi-statement transactions.
- db.setAutoCommit(false);
-
- // Perform the transfer. This assumes the 'accounts'
- // table has already been created in the database.
- RetryableTransaction transfer = transferFunds(1, 2, 100);
- retryTransaction(db, transfer);
-
- // Check balances after transfer.
- db.setAutoCommit(true);
- ResultSet res = db.createStatement()
- .executeQuery("SELECT id, balance FROM accounts");
- while (res.next()) {
- System.out.printf("\taccount %s: %s\n", res.getInt("id"),
- res.getInt("balance"));
- }
-
- } catch(InsufficientBalanceException e) {
- System.out.println("Insufficient balance");
- } catch(AccountNotFoundException e) {
- System.out.println("No users in the table with id " + e.account);
- } catch(AmbiguousCommitException e) {
- System.out.println("Ambiguous result encountered: " + e);
- } catch(SQLException e) {
-            System.out.println("SQLException encountered: " + e);
- } finally {
- // Close the database connection.
- db.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/activerecord-basic-sample.rb b/src/current/_includes/v2.1/app/insecure/activerecord-basic-sample.rb
deleted file mode 100644
index 601838ee789..00000000000
--- a/src/current/_includes/v2.1/app/insecure/activerecord-basic-sample.rb
+++ /dev/null
@@ -1,44 +0,0 @@
-require 'active_record'
-require 'activerecord-cockroachdb-adapter'
-require 'pg'
-
-# Connect to CockroachDB through ActiveRecord.
-# In Rails, this configuration would go in config/database.yml as usual.
-ActiveRecord::Base.establish_connection(
- adapter: 'cockroachdb',
- username: 'maxroach',
- database: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-# Define the Account model.
-# In Rails, this would go in app/models/ as usual.
-class Account < ActiveRecord::Base
- validates :id, presence: true
- validates :balance, presence: true
-end
-
-# Define a migration for the accounts table.
-# In Rails, this would go in db/migrate/ as usual.
-class Schema < ActiveRecord::Migration[5.0]
- def change
- create_table :accounts, force: true do |t|
- t.integer :balance
- end
- end
-end
-
-# Run the schema migration by hand.
-# In Rails, this would be done via rake db:migrate as usual.
-Schema.new.change()
-
-# Create two accounts, inserting two rows into the accounts table.
-Account.create(id: 1, balance: 1000)
-Account.create(id: 2, balance: 250)
-
-# Retrieve accounts and print out the balances
-Account.all.each do |acct|
- puts "#{acct.id} #{acct.balance}"
-end
diff --git a/src/current/_includes/v2.1/app/insecure/basic-sample.cs b/src/current/_includes/v2.1/app/insecure/basic-sample.cs
deleted file mode 100644
index b7cf8e1ff3f..00000000000
--- a/src/current/_includes/v2.1/app/insecure/basic-sample.cs
+++ /dev/null
@@ -1,50 +0,0 @@
-using System;
-using System.Data;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Disable;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- Simple(connStringBuilder.ConnectionString);
- }
-
- static void Simple(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/basic-sample.go b/src/current/_includes/v2.1/app/insecure/basic-sample.go
deleted file mode 100644
index 6a647f51641..00000000000
--- a/src/current/_includes/v2.1/app/insecure/basic-sample.go
+++ /dev/null
@@ -1,44 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "log"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- // Connect to the "bank" database.
- db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
-
- // Create the "accounts" table.
- if _, err := db.Exec(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil {
- log.Fatal(err)
- }
-
- // Insert two rows into the "accounts" table.
- if _, err := db.Exec(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil {
- log.Fatal(err)
- }
-
- // Print out the balances.
- rows, err := db.Query("SELECT id, balance FROM accounts")
- if err != nil {
- log.Fatal(err)
- }
- defer rows.Close()
- fmt.Println("Initial balances:")
- for rows.Next() {
- var id, balance int
- if err := rows.Scan(&id, &balance); err != nil {
- log.Fatal(err)
- }
- fmt.Printf("%d %d\n", id, balance)
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/basic-sample.js b/src/current/_includes/v2.1/app/insecure/basic-sample.js
deleted file mode 100644
index f89ea020a74..00000000000
--- a/src/current/_includes/v2.1/app/insecure/basic-sample.js
+++ /dev/null
@@ -1,55 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the "bank" database.
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257
-};
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
-
- // Close communication with the database and exit.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- finish();
- }
- async.waterfall([
- function (next) {
- // Create the 'accounts' table.
- client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
- },
- function (results, next) {
- // Insert two rows into the 'accounts' table.
- client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
- },
- function (results, next) {
- // Print out account balances.
- client.query('SELECT id, balance FROM accounts;', next);
- },
- ],
- function (err, results) {
- if (err) {
- console.error('Error inserting into and selecting from accounts: ', err);
- finish();
- }
-
- console.log('Initial balances:');
- results.rows.forEach(function (row) {
- console.log(row);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.1/app/insecure/basic-sample.php b/src/current/_includes/v2.1/app/insecure/basic-sample.php
deleted file mode 100644
index cb926bc30aa..00000000000
--- a/src/current/_includes/v2.1/app/insecure/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
-      PDO::ATTR_EMULATE_PREPARES => true,
-      PDO::ATTR_PERSISTENT => true
-  ));
-
- $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
- print "Account balances:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.1/app/insecure/basic-sample.py b/src/current/_includes/v2.1/app/insecure/basic-sample.py
deleted file mode 100644
index db023a19e33..00000000000
--- a/src/current/_includes/v2.1/app/insecure/basic-sample.py
+++ /dev/null
@@ -1,34 +0,0 @@
-# Import the driver.
-import psycopg2
-
-# Connect to the "bank" database.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='disable',
- port=26257,
- host='localhost'
-)
-
-# Make each statement commit immediately.
-conn.set_session(autocommit=True)
-
-# Open a cursor to perform database operations.
-cur = conn.cursor()
-
-# Create the "accounts" table.
-cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")
-
-# Insert two rows into the "accounts" table.
-cur.execute("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)")
-
-# Print out the balances.
-cur.execute("SELECT id, balance FROM accounts")
-rows = cur.fetchall()
-print('Initial balances:')
-for row in rows:
- print([str(cell) for cell in row])
-
-# Close the database connection.
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v2.1/app/insecure/basic-sample.rb b/src/current/_includes/v2.1/app/insecure/basic-sample.rb
deleted file mode 100644
index 904460381f6..00000000000
--- a/src/current/_includes/v2.1/app/insecure/basic-sample.rb
+++ /dev/null
@@ -1,28 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-# Create the "accounts" table.
-conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-
-# Insert two rows into the "accounts" table.
-conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-
-# Print out the balances.
-puts 'Initial balances:'
-conn.exec('SELECT id, balance FROM accounts') do |res|
- res.each do |row|
- puts row
- end
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.1/app/insecure/basic-sample.rs b/src/current/_includes/v2.1/app/insecure/basic-sample.rs
deleted file mode 100644
index 8b7c3b115a9..00000000000
--- a/src/current/_includes/v2.1/app/insecure/basic-sample.rs
+++ /dev/null
@@ -1,32 +0,0 @@
-use postgres::{Client, NoTls};
-
-fn main() {
- let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap();
-
- // Create the "accounts" table.
- client
- .execute(
- "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)",
- &[],
- )
- .unwrap();
-
- // Insert two rows into the "accounts" table.
- client
- .execute(
- "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)",
- &[],
- )
- .unwrap();
-
- // Print out the balances.
- println!("Initial balances:");
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v2.1/app/insecure/create-maxroach-user-and-bank-database.md
deleted file mode 100644
index 3c7859f0d8d..00000000000
--- a/src/current/_includes/v2.1/app/insecure/create-maxroach-user-and-bank-database.md
+++ /dev/null
@@ -1,32 +0,0 @@
-Start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE USER IF NOT EXISTS maxroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-Give the `maxroach` user the necessary permissions:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT ALL ON DATABASE bank TO maxroach;
-~~~
-
-Exit the SQL shell:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v2.1/app/insecure/gorm-basic-sample.go b/src/current/_includes/v2.1/app/insecure/gorm-basic-sample.go
deleted file mode 100644
index b8529962c2b..00000000000
--- a/src/current/_includes/v2.1/app/insecure/gorm-basic-sample.go
+++ /dev/null
@@ -1,41 +0,0 @@
-package main
-
-import (
- "fmt"
- "log"
-
- // Import GORM-related packages.
- "github.com/jinzhu/gorm"
- _ "github.com/jinzhu/gorm/dialects/postgres"
-)
-
-// Account is our model, which corresponds to the "accounts" database table.
-type Account struct {
- ID int `gorm:"primary_key"`
- Balance int
-}
-
-func main() {
- // Connect to the "bank" database as the "maxroach" user.
- const addr = "postgresql://maxroach@localhost:26257/bank?sslmode=disable"
- db, err := gorm.Open("postgres", addr)
- if err != nil {
- log.Fatal(err)
- }
- defer db.Close()
-
- // Automatically create the "accounts" table based on the Account model.
- db.AutoMigrate(&Account{})
-
- // Insert two rows into the "accounts" table.
- db.Create(&Account{ID: 1, Balance: 1000})
- db.Create(&Account{ID: 2, Balance: 250})
-
- // Print out the balances.
- var accounts []Account
- db.Find(&accounts)
- fmt.Println("Initial balances:")
- for _, account := range accounts {
- fmt.Printf("%d %d\n", account.ID, account.Balance)
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/Sample.java b/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/Sample.java
deleted file mode 100644
index ed36ae15ad3..00000000000
--- a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/Sample.java
+++ /dev/null
@@ -1,64 +0,0 @@
-package com.cockroachlabs;
-
-import org.hibernate.Session;
-import org.hibernate.SessionFactory;
-import org.hibernate.cfg.Configuration;
-
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-import javax.persistence.criteria.CriteriaQuery;
-
-public class Sample {
- // Create a SessionFactory based on our hibernate.cfg.xml configuration
- // file, which defines how to connect to the database.
- private static final SessionFactory sessionFactory =
- new Configuration()
- .configure("hibernate.cfg.xml")
- .addAnnotatedClass(Account.class)
- .buildSessionFactory();
-
- // Account is our model, which corresponds to the "accounts" database table.
- @Entity
- @Table(name="accounts")
- public static class Account {
- @Id
- @Column(name="id")
- public long id;
-
- @Column(name="balance")
- public long balance;
-
- // Convenience constructor.
- public Account(int id, int balance) {
- this.id = id;
- this.balance = balance;
- }
-
- // Hibernate needs a default (no-arg) constructor to create model objects.
- public Account() {}
- }
-
- public static void main(String[] args) throws Exception {
- Session session = sessionFactory.openSession();
-
- try {
- // Insert two rows into the "accounts" table.
- session.beginTransaction();
- session.save(new Account(1, 1000));
- session.save(new Account(2, 250));
- session.getTransaction().commit();
-
- // Print out the balances.
- CriteriaQuery query = session.getCriteriaBuilder().createQuery(Account.class);
- query.select(query.from(Account.class));
- for (Account account : session.createQuery(query).getResultList()) {
- System.out.printf("%d %d\n", account.id, account.balance);
- }
- } finally {
- session.close();
- sessionFactory.close();
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/build.gradle b/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/build.gradle
deleted file mode 100644
index 36f33d73fe6..00000000000
--- a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
- mavenCentral()
-}
-
-dependencies {
- compile 'org.hibernate:hibernate-core:5.2.4.Final'
- compile 'org.postgresql:postgresql:42.2.2.jre7'
-}
diff --git a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index 5a5f73417e5..00000000000
Binary files a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index ad27c7d746c..00000000000
--- a/src/current/_includes/v2.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,20 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
-        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
-        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
-    <session-factory>
-        <!-- Database connection settings. -->
-        <property name="connection.driver_class">org.postgresql.Driver</property>
-        <property name="dialect">org.hibernate.dialect.PostgreSQL95Dialect</property>
-        <property name="connection.url">jdbc:postgresql://127.0.0.1:26257/bank?sslmode=disable</property>
-        <property name="connection.username">maxroach</property>
-
-        <!-- Drop and re-create the database schema on startup. -->
-        <property name="hbm2ddl.auto">create</property>
-
-        <!-- Echo executed SQL to stdout for debugging. -->
-        <property name="show_sql">true</property>
-        <property name="format_sql">true</property>
-    </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v2.1/app/insecure/sequelize-basic-sample.js b/src/current/_includes/v2.1/app/insecure/sequelize-basic-sample.js
deleted file mode 100644
index ca92b98e375..00000000000
--- a/src/current/_includes/v2.1/app/insecure/sequelize-basic-sample.js
+++ /dev/null
@@ -1,35 +0,0 @@
-var Sequelize = require('sequelize-cockroachdb');
-
-// Connect to CockroachDB through Sequelize.
-var sequelize = new Sequelize('bank', 'maxroach', '', {
- dialect: 'postgres',
- port: 26257,
- logging: false
-});
-
-// Define the Account model for the "accounts" table.
-var Account = sequelize.define('accounts', {
- id: { type: Sequelize.INTEGER, primaryKey: true },
- balance: { type: Sequelize.INTEGER }
-});
-
-// Create the "accounts" table.
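-// (Sequelize's force: true drops an existing "accounts" table before re-creating it.)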
-Account.sync({force: true}).then(function() {
- // Insert two rows into the "accounts" table.
- return Account.bulkCreate([
- {id: 1, balance: 1000},
- {id: 2, balance: 250}
- ]);
-}).then(function() {
- // Retrieve accounts.
- return Account.findAll();
-}).then(function(accounts) {
- // Print out the balances.
- accounts.forEach(function(account) {
- console.log(account.id + ' ' + account.balance);
- });
- process.exit(0);
-}).catch(function(err) {
- console.error('error: ' + err.message);
- process.exit(1);
-});
diff --git a/src/current/_includes/v2.1/app/insecure/txn-sample.cs b/src/current/_includes/v2.1/app/insecure/txn-sample.cs
deleted file mode 100644
index f64a664ccff..00000000000
--- a/src/current/_includes/v2.1/app/insecure/txn-sample.cs
+++ /dev/null
@@ -1,120 +0,0 @@
-using System;
-using System.Data;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Disable;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- TxnSample(connStringBuilder.ConnectionString);
- }
-
- static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
- {
- int balance = 0;
- using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
- using (var reader = cmd.ExecuteReader())
- {
- if (reader.Read())
- {
- balance = reader.GetInt32(0);
- }
- else
- {
- throw new DataException(String.Format("Account id={0} not found", from));
- }
- }
- if (balance < amount)
- {
- throw new DataException(String.Format("Insufficient balance in account id={0}", from));
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- }
-
- static void TxnSample(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
-
- try
- {
- using (var tran = conn.BeginTransaction())
- {
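-            // The savepoint must be named "cockroach_restart" for CockroachDB
-            // to treat it as the client-side retry marker.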
- tran.Save("cockroach_restart");
- while (true)
- {
- try
- {
- TransferFunds(conn, tran, 1, 2, 100);
- tran.Commit();
- break;
- }
- catch (NpgsqlException e)
- {
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if (e.ErrorCode == 40001)
- {
- // Signal the database that we will attempt a retry.
- tran.Rollback("cockroach_restart");
- }
- else
- {
- throw;
- }
- }
- }
- }
- }
- catch (DataException e)
- {
- Console.WriteLine(e.Message);
- }
-
- // Now printout the results.
- Console.WriteLine("Final balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/txn-sample.go b/src/current/_includes/v2.1/app/insecure/txn-sample.go
deleted file mode 100644
index 2c0cd1b6da6..00000000000
--- a/src/current/_includes/v2.1/app/insecure/txn-sample.go
+++ /dev/null
@@ -1,51 +0,0 @@
-package main
-
-import (
- "context"
- "database/sql"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
-
- // Run a transfer in a transaction.
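-	// crdb.ExecuteTx wraps the closure in BEGIN/COMMIT and re-runs it
-	// whenever CockroachDB reports a retryable error (SQLSTATE 40001).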
- err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
- return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
diff --git a/src/current/_includes/v2.1/app/insecure/txn-sample.js b/src/current/_includes/v2.1/app/insecure/txn-sample.js
deleted file mode 100644
index c44309b01a2..00000000000
--- a/src/current/_includes/v2.1/app/insecure/txn-sample.js
+++ /dev/null
@@ -1,146 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the bank database.
-
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257
-};
-
-// Wrapper for a transaction. This automatically re-calls "op" with
-// the client as an argument as long as the database server asks for
-// the transaction to be retried.
-
-function txnWrapper(client, op, next) {
- client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return next(err);
- }
-
- var released = false;
- async.doWhilst(function (done) {
- var handleError = function (err) {
- // If we got an error, see if it's a retryable one
- // and, if so, restart.
- if (err.code === '40001') {
- // Signal the database that we'll retry.
- return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done);
- }
- // A non-retryable error; break out of the
- // doWhilst with an error.
- return done(err);
- };
-
- // Attempt the work.
- op(client, function (err) {
- if (err) {
- return handleError(err);
- }
- var opResults = arguments;
-
- // If we reach this point, release and commit.
- client.query('RELEASE SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return handleError(err);
- }
- released = true;
- return done.apply(null, opResults);
- });
- });
- },
- function () {
- return !released;
- },
- function (err) {
- if (err) {
- client.query('ROLLBACK', function () {
- next(err);
- });
- } else {
- var txnResults = arguments;
- client.query('COMMIT', function (err) {
- if (err) {
- return next(err);
- } else {
- return next.apply(null, txnResults);
- }
- });
- }
- });
- });
-}
-
-// The transaction we want to run.
-
-function transferFunds(client, from, to, amount, next) {
- // Check the current balance.
- client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) {
- if (err) {
- return next(err);
- } else if (results.rows.length === 0) {
- return next(new Error('account not found in table'));
- }
-
- var acctBal = results.rows[0].balance;
- if (acctBal >= amount) {
- // Perform the transfer.
- async.waterfall([
- function (next) {
- // Subtract amount from account 1.
- client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next);
- },
- function (updateResult, next) {
- // Add amount to account 2.
- client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next);
- },
- function (updateResult, next) {
- // Fetch account balances after updates.
- client.query('SELECT id, balance FROM accounts', function (err, selectResult) {
- next(err, selectResult ? selectResult.rows : null);
- });
- }
- ], next);
- } else {
- next(new Error('insufficient funds'));
- }
- });
-}
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
- // Closes communication with the database and exits.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- finish();
- }
-
- // Execute the transaction.
- txnWrapper(client,
- function (client, next) {
- transferFunds(client, 1, 2, 100, next);
- },
- function (err, results) {
- if (err) {
- console.error('error performing transaction', err);
- finish();
- }
-
- console.log('Balances after transfer:');
- results.forEach(function (result) {
- console.log(result);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.1/app/insecure/txn-sample.php b/src/current/_includes/v2.1/app/insecure/txn-sample.php
deleted file mode 100644
index e060d311cc3..00000000000
--- a/src/current/_includes/v2.1/app/insecure/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
-  try {
-    $dbh->beginTransaction();
- // This savepoint allows us to retry our transaction.
- $dbh->exec("SAVEPOINT cockroach_restart");
- } catch (Exception $e) {
- throw $e;
- }
-
- while (true) {
- try {
- $stmt = $dbh->prepare(
- 'UPDATE accounts SET balance = balance + :deposit ' .
- 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
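-      // For a withdrawal (a negative :deposit), the WHERE clause only matches
-      // when the balance would stay non-negative, so the overdraft check is
-      // built into the UPDATE itself.
-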
- // First, withdraw the money from the old account (if possible).
- $stmt->bindValue(':account', $from, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "source account does not exist or is underfunded\r\n";
- return;
- }
-
- // Next, deposit into the new account (if it exists).
- $stmt->bindValue(':account', $to, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "destination account does not exist\r\n";
- return;
- }
-
- // Attempt to release the savepoint (which is really the commit).
- $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
- $dbh->commit();
- return;
- } catch (PDOException $e) {
- if ($e->getCode() != '40001') {
- // Non-recoverable error. Rollback and bubble error up the chain.
- $dbh->rollBack();
- throw $e;
- } else {
- // Cockroach transaction retry code. Rollback to the savepoint and
- // restart.
- $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
- }
- }
- }
-}
-
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- ));
-
- transferMoney($dbh, 1, 2, 10);
-
- print "Account balances after transfer:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.1/app/insecure/txn-sample.py b/src/current/_includes/v2.1/app/insecure/txn-sample.py
deleted file mode 100644
index 2ea05a85704..00000000000
--- a/src/current/_includes/v2.1/app/insecure/txn-sample.py
+++ /dev/null
@@ -1,73 +0,0 @@
-# Import the driver.
-import psycopg2
-import psycopg2.errorcodes
-
-# Connect to the cluster.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='disable',
- port=26257,
- host='localhost'
-)
-
-def onestmt(conn, sql):
- with conn.cursor() as cur:
- cur.execute(sql)
-
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn, op):
- with conn:
- onestmt(conn, "SAVEPOINT cockroach_restart")
- while True:
- try:
- # Attempt the work.
- op(conn)
-
- # If we reach this point, commit.
- onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
- break
-
- except psycopg2.OperationalError as e:
- if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
- # A non-retryable error; report this up the call stack.
- raise e
- # Signal the database that we'll retry.
- onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
-
-
-# The transaction we want to run.
-def transfer_funds(txn, frm, to, amount):
- with txn.cursor() as cur:
-
- # Check the current balance.
- cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
- from_balance = cur.fetchone()[0]
- if from_balance < amount:
-            raise Exception("Insufficient funds")
-
- # Perform the transfer.
- cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
- (amount, frm))
- cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
- (amount, to))
-
-
-# Execute the transaction.
-run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
-
-
-with conn:
- with conn.cursor() as cur:
- # Check account balances.
- cur.execute("SELECT id, balance FROM accounts")
- rows = cur.fetchall()
- print('Balances after transfer:')
- for row in rows:
- print([str(cell) for cell in row])
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.1/app/insecure/txn-sample.rb b/src/current/_includes/v2.1/app/insecure/txn-sample.rb
deleted file mode 100644
index 416efb9e24d..00000000000
--- a/src/current/_includes/v2.1/app/insecure/txn-sample.rb
+++ /dev/null
@@ -1,49 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
- conn.transaction do |txn|
- txn.exec('SAVEPOINT cockroach_restart')
- while
- begin
- # Attempt the work.
- yield txn
-
- # If we reach this point, commit.
- txn.exec('RELEASE SAVEPOINT cockroach_restart')
- break
- rescue PG::TRSerializationFailure
- txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
- end
- end
- end
-end
-
-def transfer_funds(txn, from, to, amount)
- txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
- res.each do |row|
- raise 'insufficient funds' if Integer(row['balance']) < amount
- end
- end
- txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
- txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'disable'
-)
-
-run_transaction(conn) do |txn|
- transfer_funds(txn, 1, 2, 100)
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.1/app/insecure/txn-sample.rs b/src/current/_includes/v2.1/app/insecure/txn-sample.rs
deleted file mode 100644
index d1dd0e021c9..00000000000
--- a/src/current/_includes/v2.1/app/insecure/txn-sample.rs
+++ /dev/null
@@ -1,60 +0,0 @@
-use postgres::{error::SqlState, Client, Error, NoTls, Transaction};
-
-/// Runs op inside a transaction and retries it as needed.
-/// On non-retryable failures, the transaction is aborted and
-/// rolled back; on success, the transaction is committed.
-fn execute_txn<T, F>(client: &mut Client, op: F) -> Result<T, Error>
-where
-    F: Fn(&mut Transaction) -> Result<T, Error>,
-{
- let mut txn = client.transaction()?;
- loop {
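-        // Each attempt runs inside a savepoint named "cockroach_restart",
-        // which CockroachDB recognizes as its client-side retry marker.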
- let mut sp = txn.savepoint("cockroach_restart")?;
- match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) {
- Err(ref err)
- if err
- .code()
- .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE)
- .unwrap_or(false) => {}
- r => break r,
- }
- }
- .and_then(|t| txn.commit().map(|_| t))
-}
-
-fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> {
- // Read the balance.
- let from_balance: i64 = txn
- .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])?
- .get(0);
-
- assert!(from_balance >= amount);
-
- // Perform the transfer.
- txn.execute(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
- &[&amount, &from],
- )?;
- txn.execute(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
- &[&amount, &to],
- )?;
- Ok(())
-}
-
-fn main() {
- let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap();
-
- // Run a transfer in a transaction.
- execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
-
- // Check account balances after the transaction.
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v2.1/app/project.clj b/src/current/_includes/v2.1/app/project.clj
deleted file mode 100644
index 41efc324b59..00000000000
--- a/src/current/_includes/v2.1/app/project.clj
+++ /dev/null
@@ -1,7 +0,0 @@
-(defproject test "0.1"
- :description "CockroachDB test"
- :url "http://cockroachlabs.com/"
- :dependencies [[org.clojure/clojure "1.8.0"]
- [org.clojure/java.jdbc "0.6.1"]
- [org.postgresql/postgresql "9.4.1211"]]
- :main test.test)
diff --git a/src/current/_includes/v2.1/app/see-also-links.md b/src/current/_includes/v2.1/app/see-also-links.md
deleted file mode 100644
index 90f06751e13..00000000000
--- a/src/current/_includes/v2.1/app/see-also-links.md
+++ /dev/null
@@ -1,9 +0,0 @@
-You might also be interested in using a local cluster to explore the following CockroachDB benefits:
-
-- [Client Connection Parameters](connection-parameters.html)
-- [Data Replication](demo-data-replication.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Automatic Rebalancing](demo-automatic-rebalancing.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Follow-the-Workload](demo-follow-the-workload.html)
-- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
diff --git a/src/current/_includes/v2.1/app/sequelize-basic-sample.js b/src/current/_includes/v2.1/app/sequelize-basic-sample.js
deleted file mode 100644
index d87ff2ca5a5..00000000000
--- a/src/current/_includes/v2.1/app/sequelize-basic-sample.js
+++ /dev/null
@@ -1,62 +0,0 @@
-var Sequelize = require('sequelize-cockroachdb');
-var fs = require('fs');
-
-// Connect to CockroachDB through Sequelize.
-var sequelize = new Sequelize('bank', 'maxroach', '', {
- dialect: 'postgres',
- port: 26257,
- logging: false,
- dialectOptions: {
- ssl: {
- ca: fs.readFileSync('certs/ca.crt')
- .toString(),
- key: fs.readFileSync('certs/client.maxroach.key')
- .toString(),
- cert: fs.readFileSync('certs/client.maxroach.crt')
- .toString()
- }
- }
-});
-
-// Define the Account model for the "accounts" table.
-var Account = sequelize.define('accounts', {
- id: {
- type: Sequelize.INTEGER,
- primaryKey: true
- },
- balance: {
- type: Sequelize.INTEGER
- }
-});
-
-// Create the "accounts" table.
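-// (Sequelize's force: true drops an existing "accounts" table before re-creating it.)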
-Account.sync({
- force: true
- })
- .then(function () {
- // Insert two rows into the "accounts" table.
- return Account.bulkCreate([{
- id: 1,
- balance: 1000
- },
- {
- id: 2,
- balance: 250
- }
- ]);
- })
- .then(function () {
- // Retrieve accounts.
- return Account.findAll();
- })
- .then(function (accounts) {
- // Print out the balances.
- accounts.forEach(function (account) {
- console.log(account.id + ' ' + account.balance);
- });
- process.exit(0);
- })
- .catch(function (err) {
- console.error('error: ' + err.message);
- process.exit(1);
- });
diff --git a/src/current/_includes/v2.1/app/sqlalchemy-basic-sample.py b/src/current/_includes/v2.1/app/sqlalchemy-basic-sample.py
deleted file mode 100644
index 1b8801c5173..00000000000
--- a/src/current/_includes/v2.1/app/sqlalchemy-basic-sample.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import random
-from math import floor
-from sqlalchemy import create_engine, Column, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-from cockroachdb.sqlalchemy import run_transaction
-
-Base = declarative_base()
-
-
-# The Account class corresponds to the "accounts" database table.
-class Account(Base):
- __tablename__ = 'accounts'
- id = Column(Integer, primary_key=True)
- balance = Column(Integer)
-
-
-# Create an engine to communicate with the database. The
-# "cockroachdb://" prefix for the engine URL indicates that we are
-# connecting to CockroachDB using the 'cockroachdb' dialect.
-# For more information, see
-# https://github.com/cockroachdb/sqlalchemy-cockroachdb.
-
-secure_cluster = True # Set to False for insecure clusters
-connect_args = {}
-
-if secure_cluster:
- connect_args = {
- 'sslmode': 'require',
- 'sslrootcert': 'certs/ca.crt',
- 'sslkey': 'certs/client.maxroach.key',
- 'sslcert': 'certs/client.maxroach.crt'
- }
-else:
- connect_args = {'sslmode': 'disable'}
-
-engine = create_engine(
- 'cockroachdb://maxroach@localhost:26257/bank',
- connect_args=connect_args,
- echo=True # Log SQL queries to stdout
-)
-
-# Automatically create the "accounts" table based on the Account class.
-Base.metadata.create_all(engine)
-
-
-# Store the account IDs we create for later use.
-
-seen_account_ids = set()
-
-
-# The code below generates random IDs for new accounts.
-
-def create_random_accounts(sess, n):
- """Create N new accounts with random IDs and random account balances.
-
- Note that since this is a demo, we do not do any work to ensure the
- new IDs do not collide with existing IDs.
- """
- new_accounts = []
- elems = iter(range(n))
- for i in elems:
- billion = 1000000000
- new_id = floor(random.random()*billion)
- seen_account_ids.add(new_id)
- new_accounts.append(
- Account(
- id=new_id,
- balance=floor(random.random()*1000000)
- )
- )
- sess.add_all(new_accounts)
-
-
-run_transaction(sessionmaker(bind=engine),
- lambda s: create_random_accounts(s, 100))
-
-
-# Helper for getting random existing account IDs.
-
-def get_random_account_id():
- id = random.choice(tuple(seen_account_ids))
- return id
-
-
-def transfer_funds_randomly(session):
- """Transfer money randomly between accounts (during SESSION).
-
- Cuts a randomly selected account's balance in half, and gives the
- other half to some other randomly selected account.
- """
- source_id = get_random_account_id()
- sink_id = get_random_account_id()
-
- source = session.query(Account).filter_by(id=source_id).one()
- amount = floor(source.balance/2)
-
- # Check balance of the first account.
- if source.balance < amount:
-        raise Exception("Insufficient funds")
-
- source.balance -= amount
- session.query(Account).filter_by(id=sink_id).update(
- {"balance": (Account.balance + amount)}
- )
-
-
-# Run the transfer inside a transaction.
-
-run_transaction(sessionmaker(bind=engine), transfer_funds_randomly)
diff --git a/src/current/_includes/v2.1/app/sqlalchemy-large-txns.py b/src/current/_includes/v2.1/app/sqlalchemy-large-txns.py
deleted file mode 100644
index bc7399b663c..00000000000
--- a/src/current/_includes/v2.1/app/sqlalchemy-large-txns.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from sqlalchemy import create_engine, Column, Float, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-from cockroachdb.sqlalchemy import run_transaction
-from random import random
-
-Base = declarative_base()
-
-# The code below assumes you are running as 'root' and have run
-# the following SQL statements against an insecure cluster.
-
-# CREATE DATABASE pointstore;
-
-# USE pointstore;
-
-# CREATE TABLE points (
-# id INT PRIMARY KEY DEFAULT unique_rowid(),
-# x FLOAT NOT NULL,
-# y FLOAT NOT NULL,
-# z FLOAT NOT NULL
-# );
-
-engine = create_engine(
- 'cockroachdb://root@localhost:26257/pointstore',
- connect_args={
- 'sslmode': 'disable',
- },
- echo=True
-)
-
-
-class Point(Base):
- __tablename__ = 'points'
- id = Column(Integer, primary_key=True)
- x = Column(Float)
- y = Column(Float)
- z = Column(Float)
-
-
-def add_points(num_points):
- chunk_size = 1000 # Tune this based on object sizes.
-
- def add_points_helper(sess, chunk, num_points):
- points = []
- for i in range(chunk, min(chunk + chunk_size, num_points)):
- points.append(
- Point(x=random()*1024, y=random()*1024, z=random()*1024)
- )
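-        # bulk_save_objects skips most per-object ORM bookkeeping, which keeps
-        # each chunk's INSERT round-trips cheap.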
- sess.bulk_save_objects(points)
-
- for chunk in range(0, num_points, chunk_size):
- run_transaction(
- sessionmaker(bind=engine),
- lambda s: add_points_helper(
- s, chunk, min(chunk + chunk_size, num_points)
- )
- )
-
-
-add_points(10000)
diff --git a/src/current/_includes/v2.1/app/txn-sample.clj b/src/current/_includes/v2.1/app/txn-sample.clj
deleted file mode 100644
index 75ee7b4ba62..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.clj
+++ /dev/null
@@ -1,43 +0,0 @@
-(ns test.test
- (:require [clojure.java.jdbc :as j]
- [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:subprotocol "postgresql"
- :subname "//localhost:26257/bank"
- :user "maxroach"
- :password ""})
-
-;; The transaction we want to run.
-(defn transferFunds
- [txn from to amount]
-
- ;; Check the current balance.
- (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
- (mapv :balance)
- (first))]
- (when (< fromBalance amount)
- (throw (Exception. "Insufficient funds"))))
-
- ;; Perform the transfer.
- (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
- (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
-
-(defn test-txn []
- ;; Connect to the cluster and run the code below with
- ;; the connection object bound to 'conn'.
- (j/with-db-connection [conn db-spec]
-
- ;; Execute the transaction within an automatic retry block;
- ;; the transaction object is bound to 'txn'.
- (util/with-txn-retry [txn conn]
- (transferFunds txn 1 2 100))
-
- ;; Execute a query outside of an automatic retry block.
- (println "Balances after transfer:")
- (->> (j/query conn ["SELECT id, balance FROM accounts"])
- (map println)
- (doall))))
-
-(defn -main [& args]
- (test-txn))
diff --git a/src/current/_includes/v2.1/app/txn-sample.cpp b/src/current/_includes/v2.1/app/txn-sample.cpp
deleted file mode 100644
index dcdf0ca973d..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.cpp
+++ /dev/null
@@ -1,76 +0,0 @@
-// Build with g++ -std=c++11 txn-sample.cpp -lpq -lpqxx
-
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-void transferFunds(
- pqxx::dbtransaction *tx, int from, int to, int amount) {
- // Read the balance.
- pqxx::result r = tx->exec(
- "SELECT balance FROM accounts WHERE id = " + to_string(from));
- assert(r.size() == 1);
-  int fromBalance = r[0][0].as<int>();
-
- if (fromBalance < amount) {
- throw domain_error("insufficient funds");
- }
-
- // Perform the transfer.
- tx->exec("UPDATE accounts SET balance = balance - "
- + to_string(amount) + " WHERE id = " + to_string(from));
- tx->exec("UPDATE accounts SET balance = balance + "
- + to_string(amount) + " WHERE id = " + to_string(to));
-}
-
-
-// ExecuteTx runs fn inside a transaction and retries it as needed.
-// On non-retryable failures, the transaction is aborted and rolled
-// back; on success, the transaction is committed.
-//
-// For more information about CockroachDB's transaction model see
-// https://cockroachlabs.com/docs/transactions.html.
-//
-// NOTE: the supplied exec closure should not have external side
-// effects beyond changes to the database.
-void executeTx(
-    pqxx::connection *c, function<void(pqxx::dbtransaction *)> fn) {
- pqxx::work tx(*c);
- while (true) {
- try {
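-      // libpqxx models SQL SAVEPOINTs as subtransactions; the name
-      // "cockroach_restart" opts in to CockroachDB's retry protocol.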
- pqxx::subtransaction s(tx, "cockroach_restart");
- fn(&s);
- s.commit();
- break;
- } catch (const pqxx::pqxx_exception& e) {
- // Swallow "transaction restart" errors; the transaction will be retried.
- // Unfortunately libpqxx doesn't give us access to the error code, so we
- // do string matching to identify retriable errors.
- if (string(e.base().what()).find("restart transaction:") == string::npos) {
- throw;
- }
- }
- }
- tx.commit();
-}
-
-int main() {
- try {
- pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
- executeTx(&c, [](pqxx::dbtransaction *tx) {
- transferFunds(tx, 1, 2, 100);
- });
- }
- catch (const exception &e) {
- cerr << e.what() << endl;
- return 1;
- }
- cout << "Success" << endl;
- return 0;
-}
diff --git a/src/current/_includes/v2.1/app/txn-sample.cs b/src/current/_includes/v2.1/app/txn-sample.cs
deleted file mode 100644
index 54d0c1c4f3d..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.cs
+++ /dev/null
@@ -1,168 +0,0 @@
-using System;
-using System.Data;
-using System.Security.Cryptography.X509Certificates;
-using System.Net.Security;
-using Npgsql;
-
-namespace Cockroach
-{
- class MainClass
- {
- static void Main(string[] args)
- {
- var connStringBuilder = new NpgsqlConnectionStringBuilder();
- connStringBuilder.Host = "localhost";
- connStringBuilder.Port = 26257;
- connStringBuilder.SslMode = SslMode.Require;
- connStringBuilder.Username = "maxroach";
- connStringBuilder.Database = "bank";
- TxnSample(connStringBuilder.ConnectionString);
- }
-
- static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount)
- {
- int balance = 0;
- using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran))
- using (var reader = cmd.ExecuteReader())
- {
- if (reader.Read())
- {
- balance = reader.GetInt32(0);
- }
- else
- {
- throw new DataException(String.Format("Account id={0} not found", from));
- }
- }
- if (balance < amount)
- {
- throw new DataException(String.Format("Insufficient balance in account id={0}", from));
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran))
- {
- cmd.ExecuteNonQuery();
- }
- }
-
- static void TxnSample(string connString)
- {
- using (var conn = new NpgsqlConnection(connString))
- {
- conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback;
- conn.UserCertificateValidationCallback += UserCertificateValidationCallback;
-
- conn.Open();
-
- // Create the "accounts" table.
- new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery();
-
- // Insert two rows into the "accounts" table.
- using (var cmd = new NpgsqlCommand())
- {
- cmd.Connection = conn;
- cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)";
- cmd.Parameters.AddWithValue("id1", 1);
- cmd.Parameters.AddWithValue("val1", 1000);
- cmd.Parameters.AddWithValue("id2", 2);
- cmd.Parameters.AddWithValue("val2", 250);
- cmd.ExecuteNonQuery();
- }
-
- // Print out the balances.
- System.Console.WriteLine("Initial balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
-
- try
- {
- using (var tran = conn.BeginTransaction())
- {
- tran.Save("cockroach_restart");
- while (true)
- {
- try
- {
- TransferFunds(conn, tran, 1, 2, 100);
- tran.Commit();
- break;
- }
- catch (NpgsqlException e)
- {
- // Check if the error code indicates a SERIALIZATION_FAILURE.
- if (e.ErrorCode == 40001)
- {
- // Signal the database that we will attempt a retry.
- tran.Rollback("cockroach_restart");
- }
- else
- {
- throw;
- }
- }
- }
- }
- }
- catch (DataException e)
- {
- Console.WriteLine(e.Message);
- }
-
- // Now print out the results.
- Console.WriteLine("Final balances:");
- using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn))
- using (var reader = cmd.ExecuteReader())
- while (reader.Read())
- Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1));
- }
- }
-
- static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts)
- {
- // To be able to add a certificate with a private key included, we must convert it to
- // a PKCS #12 format. The following openssl command does this:
- // openssl pkcs12 -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx
- // As of 2018-12-10, you need to provide a password for this to work on macOS.
- // See https://github.com/dotnet/corefx/issues/24225
- clientCerts.Add(new X509Certificate2("certs/client.maxroach.pfx", "pass"));
- }
-
- // By default, .Net does all of its certificate verification using the system certificate store.
- // This callback is necessary to validate the server certificate against a CA certificate file.
- static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors)
- {
- X509Certificate2 caCert = new X509Certificate2("certs/ca.crt");
- X509Chain caCertChain = new X509Chain();
- caCertChain.ChainPolicy = new X509ChainPolicy()
- {
- RevocationMode = X509RevocationMode.NoCheck,
- RevocationFlag = X509RevocationFlag.EntireChain
- };
- caCertChain.ChainPolicy.ExtraStore.Add(caCert);
-
- X509Certificate2 serverCert = new X509Certificate2(certificate);
-
- caCertChain.Build(serverCert);
- if (caCertChain.ChainStatus.Length == 0)
- {
- // No errors
- return true;
- }
-
- foreach (X509ChainStatus status in caCertChain.ChainStatus)
- {
- // Check if we got any errors other than UntrustedRoot (which we will always get if we do not install the CA cert to the system store)
- if (status.Status != X509ChainStatusFlags.UntrustedRoot)
- {
- return false;
- }
- }
- return true;
- }
- }
-}
diff --git a/src/current/_includes/v2.1/app/txn-sample.go b/src/current/_includes/v2.1/app/txn-sample.go
deleted file mode 100644
index fc15275abca..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.go
+++ /dev/null
@@ -1,53 +0,0 @@
-package main
-
-import (
- "context"
- "database/sql"
- "fmt"
- "log"
-
- "github.com/cockroachdb/cockroach-go/crdb"
-)
-
-func transferFunds(tx *sql.Tx, from int, to int, amount int) error {
- // Read the balance.
- var fromBalance int
- if err := tx.QueryRow(
- "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil {
- return err
- }
-
- if fromBalance < amount {
- return fmt.Errorf("insufficient funds")
- }
-
- // Perform the transfer.
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
- return err
- }
- if _, err := tx.Exec(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
- return err
- }
- return nil
-}
-
-func main() {
- db, err := sql.Open("postgres",
- "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
- if err != nil {
- log.Fatal("error connecting to the database: ", err)
- }
- defer db.Close()
-
- // Run a transfer in a transaction.
- err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error {
- return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */)
- })
- if err == nil {
- fmt.Println("Success")
- } else {
- log.Fatal("error: ", err)
- }
-}
diff --git a/src/current/_includes/v2.1/app/txn-sample.js b/src/current/_includes/v2.1/app/txn-sample.js
deleted file mode 100644
index 1eebaacad30..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.js
+++ /dev/null
@@ -1,154 +0,0 @@
-var async = require('async');
-var fs = require('fs');
-var pg = require('pg');
-
-// Connect to the bank database.
-
-var config = {
- user: 'maxroach',
- host: 'localhost',
- database: 'bank',
- port: 26257,
- ssl: {
- ca: fs.readFileSync('certs/ca.crt')
- .toString(),
- key: fs.readFileSync('certs/client.maxroach.key')
- .toString(),
- cert: fs.readFileSync('certs/client.maxroach.crt')
- .toString()
- }
-};
-
-// Wrapper for a transaction. This automatically re-calls "op" with
-// the client as an argument as long as the database server asks for
-// the transaction to be retried.
-
-function txnWrapper(client, op, next) {
- client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return next(err);
- }
-
- var released = false;
- async.doWhilst(function (done) {
- var handleError = function (err) {
- // If we got an error, see if it's a retryable one
- // and, if so, restart.
- if (err.code === '40001') {
- // Signal the database that we'll retry.
- return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done);
- }
- // A non-retryable error; break out of the
- // doWhilst with an error.
- return done(err);
- };
-
- // Attempt the work.
- op(client, function (err) {
- if (err) {
- return handleError(err);
- }
- var opResults = arguments;
-
- // If we reach this point, release and commit.
- client.query('RELEASE SAVEPOINT cockroach_restart', function (err) {
- if (err) {
- return handleError(err);
- }
- released = true;
- return done.apply(null, opResults);
- });
- });
- },
- function () {
- return !released;
- },
- function (err) {
- if (err) {
- client.query('ROLLBACK', function () {
- next(err);
- });
- } else {
- var txnResults = arguments;
- client.query('COMMIT', function (err) {
- if (err) {
- return next(err);
- } else {
- return next.apply(null, txnResults);
- }
- });
- }
- });
- });
-}
-
-// The transaction we want to run.
-
-function transferFunds(client, from, to, amount, next) {
- // Check the current balance.
- client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) {
- if (err) {
- return next(err);
- } else if (results.rows.length === 0) {
- return next(new Error('account not found in table'));
- }
-
- var acctBal = results.rows[0].balance;
- if (acctBal >= amount) {
- // Perform the transfer.
- async.waterfall([
- function (next) {
- // Subtract amount from account 1.
- client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next);
- },
- function (updateResult, next) {
- // Add amount to account 2.
- client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next);
- },
- function (updateResult, next) {
- // Fetch account balances after updates.
- client.query('SELECT id, balance FROM accounts', function (err, selectResult) {
- next(err, selectResult ? selectResult.rows : null);
- });
- }
- ], next);
- } else {
- next(new Error('insufficient funds'));
- }
- });
-}
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
- // Closes communication with the database and exits.
- var finish = function () {
- done();
- process.exit();
- };
-
- if (err) {
- console.error('could not connect to cockroachdb', err);
- return finish();
- }
-
- // Execute the transaction.
- txnWrapper(client,
- function (client, next) {
- transferFunds(client, 1, 2, 100, next);
- },
- function (err, results) {
- if (err) {
- console.error('error performing transaction', err);
- return finish();
- }
-
- console.log('Balances after transfer:');
- results.forEach(function (result) {
- console.log(result);
- });
-
- finish();
- });
-});
diff --git a/src/current/_includes/v2.1/app/txn-sample.php b/src/current/_includes/v2.1/app/txn-sample.php
deleted file mode 100644
index 363dbcd73cd..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
- try {
- $dbh->beginTransaction();
- // This savepoint allows us to retry our transaction.
- $dbh->exec("SAVEPOINT cockroach_restart");
- } catch (Exception $e) {
- throw $e;
- }
-
- while (true) {
- try {
- $stmt = $dbh->prepare(
- 'UPDATE accounts SET balance = balance + :deposit ' .
- 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)');
-
- // First, withdraw the money from the old account (if possible).
- $stmt->bindValue(':account', $from, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "source account does not exist or is underfunded\r\n";
- return;
- }
-
- // Next, deposit into the new account (if it exists).
- $stmt->bindValue(':account', $to, PDO::PARAM_INT);
- $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT);
- $stmt->execute();
- if ($stmt->rowCount() == 0) {
- print "destination account does not exist\r\n";
- return;
- }
-
- // Attempt to release the savepoint (which is really the commit).
- $dbh->exec('RELEASE SAVEPOINT cockroach_restart');
- $dbh->commit();
- return;
- } catch (PDOException $e) {
- if ($e->getCode() != '40001') {
- // Non-recoverable error. Rollback and bubble error up the chain.
- $dbh->rollBack();
- throw $e;
- } else {
- // Cockroach transaction retry code. Rollback to the savepoint and
- // restart.
- $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart');
- }
- }
- }
-}
-
-try {
- $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt',
- 'maxroach', null, array(
- PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
- PDO::ATTR_EMULATE_PREPARES => true,
- ));
-
- transferMoney($dbh, 1, 2, 10);
-
- print "Account balances after transfer:\r\n";
- foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
- print $row['id'] . ': ' . $row['balance'] . "\r\n";
- }
-} catch (Exception $e) {
- print $e->getMessage() . "\r\n";
- exit(1);
-}
-?>
diff --git a/src/current/_includes/v2.1/app/txn-sample.py b/src/current/_includes/v2.1/app/txn-sample.py
deleted file mode 100644
index d4c86a36cc8..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Import the driver.
-import psycopg2
-import psycopg2.errorcodes
-
-# Connect to the cluster.
-conn = psycopg2.connect(
- database='bank',
- user='maxroach',
- sslmode='require',
- sslrootcert='certs/ca.crt',
- sslkey='certs/client.maxroach.key',
- sslcert='certs/client.maxroach.crt',
- port=26257,
- host='localhost'
-)
-
-def onestmt(conn, sql):
- with conn.cursor() as cur:
- cur.execute(sql)
-
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn, op):
- with conn:
- onestmt(conn, "SAVEPOINT cockroach_restart")
- while True:
- try:
- # Attempt the work.
- op(conn)
-
- # If we reach this point, commit.
- onestmt(conn, "RELEASE SAVEPOINT cockroach_restart")
- break
-
- except psycopg2.OperationalError as e:
- if e.pgcode != psycopg2.errorcodes.SERIALIZATION_FAILURE:
- # A non-retryable error; report this up the call stack.
- raise e
- # Signal the database that we'll retry.
- onestmt(conn, "ROLLBACK TO SAVEPOINT cockroach_restart")
-
-
-# The transaction we want to run.
-def transfer_funds(txn, frm, to, amount):
- with txn.cursor() as cur:
-
- # Check the current balance.
- cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
- from_balance = cur.fetchone()[0]
- if from_balance < amount:
- raise "Insufficient funds"
-
- # Perform the transfer.
- cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
- (amount, frm))
- cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
- (amount, to))
-
-
-# Execute the transaction.
-run_transaction(conn, lambda conn: transfer_funds(conn, 1, 2, 100))
-
-
-with conn:
- with conn.cursor() as cur:
- # Check account balances.
- cur.execute("SELECT id, balance FROM accounts")
- rows = cur.fetchall()
- print('Balances after transfer:')
- for row in rows:
- print([str(cell) for cell in row])
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.1/app/txn-sample.rb b/src/current/_includes/v2.1/app/txn-sample.rb
deleted file mode 100644
index 1c3e028fdf7..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.rb
+++ /dev/null
@@ -1,52 +0,0 @@
-# Import the driver.
-require 'pg'
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
-def run_transaction(conn)
- conn.transaction do |txn|
- txn.exec('SAVEPOINT cockroach_restart')
- while
- begin
- # Attempt the work.
- yield txn
-
- # If we reach this point, commit.
- txn.exec('RELEASE SAVEPOINT cockroach_restart')
- break
- rescue PG::TRSerializationFailure
- txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart')
- end
- end
- end
-end
-
-def transfer_funds(txn, from, to, amount)
- txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res|
- res.each do |row|
- raise 'insufficient funds' if Integer(row['balance']) < amount
- end
- end
- txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from])
- txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to])
-end
-
-# Connect to the "bank" database.
-conn = PG.connect(
- user: 'maxroach',
- dbname: 'bank',
- host: 'localhost',
- port: 26257,
- sslmode: 'require',
- sslrootcert: 'certs/ca.crt',
- sslkey:'certs/client.maxroach.key',
- sslcert:'certs/client.maxroach.crt'
-)
-
-run_transaction(conn) do |txn|
- transfer_funds(txn, 1, 2, 100)
-end
-
-# Close communication with the database.
-conn.close()
diff --git a/src/current/_includes/v2.1/app/txn-sample.rs b/src/current/_includes/v2.1/app/txn-sample.rs
deleted file mode 100644
index c8e099b89e6..00000000000
--- a/src/current/_includes/v2.1/app/txn-sample.rs
+++ /dev/null
@@ -1,73 +0,0 @@
-use openssl::error::ErrorStack;
-use openssl::ssl::{SslConnector, SslFiletype, SslMethod};
-use postgres::{error::SqlState, Client, Error, Transaction};
-use postgres_openssl::MakeTlsConnector;
-
-/// Runs op inside a transaction and retries it as needed.
-/// On non-retryable failures, the transaction is aborted and
-/// rolled back; on success, the transaction is committed.
-fn execute_txn<T, F>(client: &mut Client, op: F) -> Result<T, Error>
-where
- F: Fn(&mut Transaction) -> Result<T, Error>,
-{
- let mut txn = client.transaction()?;
- loop {
- let mut sp = txn.savepoint("cockroach_restart")?;
- match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) {
- Err(ref err)
- if err
- .code()
- .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE)
- .unwrap_or(false) => {}
- r => break r,
- }
- }
- .and_then(|t| txn.commit().map(|_| t))
-}
-
-fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> {
- // Read the balance.
- let from_balance: i64 = txn
- .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])?
- .get(0);
-
- assert!(from_balance >= amount);
-
- // Perform the transfer.
- txn.execute(
- "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
- &[&amount, &from],
- )?;
- txn.execute(
- "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
- &[&amount, &to],
- )?;
- Ok(())
-}
-
-fn ssl_config() -> Result<MakeTlsConnector, ErrorStack> {
- let mut builder = SslConnector::builder(SslMethod::tls())?;
- builder.set_ca_file("certs/ca.crt")?;
- builder.set_certificate_chain_file("certs/client.maxroach.crt")?;
- builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?;
- Ok(MakeTlsConnector::new(builder.build()))
-}
-
-fn main() {
- let connector = ssl_config().unwrap();
- let mut client =
- Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap();
-
- // Run a transfer in a transaction.
- execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
-
- // Check account balances after the transaction.
- for row in &client
- .query("SELECT id, balance FROM accounts", &[])
- .unwrap()
- {
- let id: i64 = row.get(0);
- let balance: i64 = row.get(1);
- println!("{} {}", id, balance);
- }
-}
diff --git a/src/current/_includes/v2.1/app/util.clj b/src/current/_includes/v2.1/app/util.clj
deleted file mode 100644
index d040affe794..00000000000
--- a/src/current/_includes/v2.1/app/util.clj
+++ /dev/null
@@ -1,38 +0,0 @@
-(ns test.util
- (:require [clojure.java.jdbc :as j]
- [clojure.walk :as walk]))
-
-(defn txn-restart-err?
- "Takes an exception and returns true if it is a CockroachDB retry error."
- [e]
- (when-let [m (.getMessage e)]
- (condp instance? e
- java.sql.BatchUpdateException
- (and (re-find #"getNextExc" m)
- (txn-restart-err? (.getNextException e)))
-
- org.postgresql.util.PSQLException
- (= (.getSQLState e) "40001") ; 40001 is the code returned by CockroachDB retry errors.
-
- false)))
-
-;; Wrapper for a transaction.
-;; This automatically invokes the body again as long as the database server
-;; asks the transaction to be retried.
-
-(defmacro with-txn-retry
- "Wrap an evaluation within a CockroachDB retry block."
- [[txn c] & body]
- `(j/with-db-transaction [~txn ~c]
- (loop []
- (j/execute! ~txn ["savepoint cockroach_restart"])
- (let [res# (try (let [r# (do ~@body)]
- {:ok r#})
- (catch java.sql.SQLException e#
- (if (txn-restart-err? e#)
- {:retry true}
- (throw e#))))]
- (if (:retry res#)
- (do (j/execute! ~txn ["rollback to savepoint cockroach_restart"])
- (recur))
- (:ok res#))))))
diff --git a/src/current/_includes/v2.1/client-transaction-retry.md b/src/current/_includes/v2.1/client-transaction-retry.md
deleted file mode 100644
index 6a54534169e..00000000000
--- a/src/current/_includes/v2.1/client-transaction-retry.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-With the default `SERIALIZABLE` [isolation level](transactions.html#isolation-levels), CockroachDB may require the client to [retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a [generic retry function](transactions.html#client-side-intervention) that runs inside a transaction and retries it as needed. The code sample below shows how it is used.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/computed-columns/add-computed-column.md b/src/current/_includes/v2.1/computed-columns/add-computed-column.md
deleted file mode 100644
index c670b1c7285..00000000000
--- a/src/current/_includes/v2.1/computed-columns/add-computed-column.md
+++ /dev/null
@@ -1,55 +0,0 @@
-In this example, create a table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE x (
- a INT NULL,
- b INT NULL AS (a * 2) STORED,
- c INT NULL AS (a + 4) STORED,
- FAMILY "primary" (a, b, rowid, c)
- );
-~~~
-
-Then, insert a row of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO x VALUES (6);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM x;
-~~~
-
-~~~
-+---+----+----+
-| a | b | c |
-+---+----+----+
-| 6 | 12 | 10 |
-+---+----+----+
-(1 row)
-~~~
-
-Now add another computed column to the table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED;
-~~~
-
-The `d` column is added to the table and computed from the `a` column divided by 2.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM x;
-~~~
-
-~~~
-+---+----+----+---+
-| a | b | c | d |
-+---+----+----+---+
-| 6 | 12 | 10 | 3 |
-+---+----+----+---+
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/computed-columns/convert-computed-column.md b/src/current/_includes/v2.1/computed-columns/convert-computed-column.md
deleted file mode 100644
index 12fd6e7d418..00000000000
--- a/src/current/_includes/v2.1/computed-columns/convert-computed-column.md
+++ /dev/null
@@ -1,108 +0,0 @@
-You can convert a stored, computed column into a regular column by using `ALTER TABLE`.
-
-In this example, create a simple table with a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE office_dogs (
- id INT PRIMARY KEY,
- first_name STRING,
- last_name STRING,
- full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO office_dogs (id, first_name, last_name) VALUES
- (1, 'Petee', 'Hirata'),
- (2, 'Carl', 'Kimball'),
- (3, 'Ernie', 'Narayan');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM office_dogs;
-~~~
-
-~~~
-+----+------------+-----------+---------------+
-| id | first_name | last_name | full_name |
-+----+------------+-----------+---------------+
-| 1 | Petee | Hirata | Petee Hirata |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-+----+------------+-----------+---------------+
-(3 rows)
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM office_dogs;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| first_name | STRING | true | NULL | | {} |
-| last_name | STRING | true | NULL | | {} |
-| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} |
-+-------------+-----------+-------------+----------------+------------------------------------+-------------+
-(4 rows)
-~~~
-
-Now, convert the computed column (`full_name`) to a regular column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED;
-~~~
-
-Check that the computed column was converted:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM office_dogs;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| first_name | STRING | true | NULL | | {} |
-| last_name | STRING | true | NULL | | {} |
-| full_name | STRING | true | NULL | | {} |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(4 rows)
-~~~
-
-The computed column is now a regular column and can be updated as such:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM office_dogs;
-~~~
-
-~~~
-+----+------------+-----------+----------------------+
-| id | first_name | last_name | full_name |
-+----+------------+-----------+----------------------+
-| 1 | Petee | Hirata | Petee Hirata |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-| 4 | Lola | McDog | This is not computed |
-+----+------------+-----------+----------------------+
-(4 rows)
-~~~
diff --git a/src/current/_includes/v2.1/computed-columns/jsonb.md b/src/current/_includes/v2.1/computed-columns/jsonb.md
deleted file mode 100644
index 76a5b08ad8a..00000000000
--- a/src/current/_includes/v2.1/computed-columns/jsonb.md
+++ /dev/null
@@ -1,35 +0,0 @@
-In this example, create a table with a `JSONB` column and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE student_profiles (
- id STRING PRIMARY KEY AS (profile->>'id') STORED,
- profile JSONB
-);
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO student_profiles (profile) VALUES
- ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'),
- ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'),
- ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM student_profiles;
-~~~
-~~~
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| id | profile |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"} |
-| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"} |
-| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-~~~
-
-The primary key `id` is computed as a field from the `profile` column.
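-
-Because `id` is an ordinary primary key, it can serve point lookups directly. As an illustration (assuming the rows above), a query like the following should return a single row by key instead of scanning every `profile` document:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT profile->>'name' AS name FROM student_profiles WHERE id = 'd78236';
-~~~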
diff --git a/src/current/_includes/v2.1/computed-columns/partitioning.md b/src/current/_includes/v2.1/computed-columns/partitioning.md
deleted file mode 100644
index 926c45793b4..00000000000
--- a/src/current/_includes/v2.1/computed-columns/partitioning.md
+++ /dev/null
@@ -1,53 +0,0 @@
-{{site.data.alerts.callout_info}}Partitioning is an enterprise feature. To request and enable a trial or full enterprise license, see [Enterprise Licensing](enterprise-licensing.html).{{site.data.alerts.end}}
-
-In this example, create a table with geo-partitioning and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE user_locations (
- locality STRING AS (CASE
- WHEN country IN ('ca', 'mx', 'us') THEN 'north_america'
- WHEN country IN ('au', 'nz') THEN 'australia'
- END) STORED,
- id SERIAL,
- name STRING,
- country STRING,
- PRIMARY KEY (locality, id))
- PARTITION BY LIST (locality)
- (PARTITION north_america VALUES IN ('north_america'),
- PARTITION australia VALUES IN ('australia'));
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO user_locations (name, country) VALUES
- ('Leonard McCoy', 'us'),
- ('Uhura', 'nz'),
- ('Spock', 'ca'),
- ('James Kirk', 'us'),
- ('Scotty', 'mx'),
- ('Hikaru Sulu', 'us'),
- ('Pavel Chekov', 'au');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM user_locations;
-~~~
-~~~
-+---------------+--------------------+---------------+---------+
-| locality | id | name | country |
-+---------------+--------------------+---------------+---------+
-| australia | 333153890100609025 | Uhura | nz |
-| australia | 333153890100772865 | Pavel Chekov | au |
-| north_america | 333153890100576257 | Leonard McCoy | us |
-| north_america | 333153890100641793 | Spock | ca |
-| north_america | 333153890100674561 | James Kirk | us |
-| north_america | 333153890100707329 | Scotty | mx |
-| north_america | 333153890100740097 | Hikaru Sulu | us |
-+---------------+--------------------+---------------+---------+
-~~~
-
-The `locality` column is computed from the `country` column.
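-
-Because `locality` is the leading column of the primary key, queries that filter on it can be served from the matching partition alone. For example (assuming the rows above), the following should only need to read the `australia` partition:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name, country FROM user_locations WHERE locality = 'australia';
-~~~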
diff --git a/src/current/_includes/v2.1/computed-columns/secondary-index.md b/src/current/_includes/v2.1/computed-columns/secondary-index.md
deleted file mode 100644
index e274db59d7e..00000000000
--- a/src/current/_includes/v2.1/computed-columns/secondary-index.md
+++ /dev/null
@@ -1,63 +0,0 @@
-In this example, create a table with a computed column and an index on that column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE gymnastics (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- athlete STRING,
- vault DECIMAL,
- bars DECIMAL,
- beam DECIMAL,
- floor DECIMAL,
- combined_score DECIMAL AS (vault + bars + beam + floor) STORED,
- INDEX total (combined_score DESC)
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES
- ('Simone Biles', 15.933, 14.800, 15.300, 15.800),
- ('Gabby Douglas', 0, 15.766, 0, 0),
- ('Laurie Hernandez', 15.100, 0, 15.233, 14.833),
- ('Madison Kocian', 0, 15.933, 0, 0),
- ('Aly Raisman', 15.833, 0, 15.000, 15.366);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM gymnastics;
-~~~
-~~~
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| id | athlete | vault | bars | beam | floor | combined_score |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman | 15.833 | 0 | 15.000 | 15.366 | 46.199 |
-| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian | 0 | 15.933 | 0 | 0 | 15.933 |
-| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas | 0 | 15.766 | 0 | 0 | 15.766 |
-| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 | 0 | 15.233 | 14.833 | 45.166 |
-| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles | 15.933 | 14.800 | 15.300 | 15.800 | 61.833 |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-~~~
-
-Now, run a query using the secondary index:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~
-~~~
-+------------------+----------------+
-| athlete | combined_score |
-+------------------+----------------+
-| Simone Biles | 61.833 |
-| Aly Raisman | 46.199 |
-| Laurie Hernandez | 45.166 |
-| Madison Kocian | 15.933 |
-| Gabby Douglas | 15.766 |
-+------------------+----------------+
-~~~
-
-The athlete with the highest combined score of 61.833 is Simone Biles.
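-
-To verify that the `total` index is what serves this query, inspect the plan with [`EXPLAIN`](explain.html); the exact output varies by version, but the `total` index should appear in it:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~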
diff --git a/src/current/_includes/v2.1/computed-columns/simple.md b/src/current/_includes/v2.1/computed-columns/simple.md
deleted file mode 100644
index d2bf9c16969..00000000000
--- a/src/current/_includes/v2.1/computed-columns/simple.md
+++ /dev/null
@@ -1,37 +0,0 @@
-In this example, let's create a simple table with a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE names (
- id INT PRIMARY KEY,
- first_name STRING,
- last_name STRING,
- full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED
- );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO names (id, first_name, last_name) VALUES
- (1, 'Lola', 'McDog'),
- (2, 'Carl', 'Kimball'),
- (3, 'Ernie', 'Narayan');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM names;
-~~~
-~~~
-+----+------------+-------------+----------------+
-| id | first_name | last_name | full_name |
-+----+------------+-------------+----------------+
-| 1 | Lola | McDog | Lola McDog |
-| 2 | Carl | Kimball | Carl Kimball |
-| 3 | Ernie | Narayan | Ernie Narayan |
-+----+------------+-------------+----------------+
-~~~
-
-The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html).
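-
-Note that a computed column cannot be written to directly; a statement like the following is expected to fail with an error rather than change the row:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> UPDATE names SET full_name = 'Carl Narayan' WHERE id = 2;
-~~~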
diff --git a/src/current/_includes/v2.1/faq/auto-generate-unique-ids.html b/src/current/_includes/v2.1/faq/auto-generate-unique-ids.html
deleted file mode 100644
index 419bc80ac65..00000000000
--- a/src/current/_includes/v2.1/faq/auto-generate-unique-ids.html
+++ /dev/null
@@ -1,87 +0,0 @@
-To auto-generate unique row IDs, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t1 (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t1 (name) VALUES ('a'), ('b'), ('c');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t1;
-~~~
-
-~~~
-+--------------------------------------+------+
-| id | name |
-+--------------------------------------+------+
-| 60853a85-681d-4620-9677-946bbfdc8fbc | c |
-| 77c9bc2e-76a5-4ebc-80c3-7ad3159466a1 | b |
-| bd3a56e1-c75e-476c-b221-0da9d74d66eb | a |
-+--------------------------------------+------+
-(3 rows)
-~~~
-
-Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t2 (id BYTES PRIMARY KEY DEFAULT uuid_v4(), name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t2 (name) VALUES ('a'), ('b'), ('c');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t2;
-~~~
-
-~~~
-+---------------------------------------------------+------+
-| id | name |
-+---------------------------------------------------+------+
-| "\x9b\x10\xdc\x11\x9a\x9cGB\xbd\x8d\t\x8c\xf6@vP" | a |
-| "\xd9s\xd7\x13\n_L*\xb0\x87c\xb6d\xe1\xd8@" | c |
-| "\uac74\x1dd@B\x97\xac\x04N&\x9eBg\x86" | b |
-+---------------------------------------------------+------+
-(3 rows)
-~~~
-
-In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 64MB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load.
-
-If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t3 (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t3 (name) VALUES ('a'), ('b'), ('c');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t3;
-~~~
-
-~~~
-+--------------------+------+
-| id | name |
-+--------------------+------+
-| 293807573840855041 | a |
-| 293807573840887809 | b |
-| 293807573840920577 | c |
-+--------------------+------+
-(3 rows)
-~~~
-
-Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed.
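-
-Because the [`SERIAL` pseudo-type](serial.html) mentioned above is shorthand for `INT DEFAULT unique_rowid()`, the `t3` example can equivalently be written as follows (the generated IDs will differ from run to run):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t4 (id SERIAL PRIMARY KEY, name STRING);
-~~~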
diff --git a/src/current/_includes/v2.1/faq/clock-synchronization-effects.md b/src/current/_includes/v2.1/faq/clock-synchronization-effects.md
deleted file mode 100644
index ab4769e842a..00000000000
--- a/src/current/_includes/v2.1/faq/clock-synchronization-effects.md
+++ /dev/null
@@ -1,29 +0,0 @@
-CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node.
-
-The one rare case to note is when a node's clock suddenly jumps beyond the maximum offset before the node detects it. Although extremely unlikely, this could occur, for example, when running CockroachDB inside a VM and the VM hypervisor decides to migrate the VM to different hardware with a different time. In this case, there can be a small window of time between when the node's clock becomes unsynchronized and when the node spontaneously shuts down. During this window, it would be possible for a client to read stale data and write data derived from stale reads. To protect against this, we recommend using the `server.clock.forward_jump_check_enabled` and `server.clock.persist_upper_bound_interval` [cluster settings](cluster-settings.html).
-
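-Both are [cluster settings](cluster-settings.html) and can be changed from any SQL session; for example (the interval shown here is only illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;
-> SET CLUSTER SETTING server.clock.persist_upper_bound_interval = '10s';
-~~~
-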
-### Considerations
-
-There are important considerations when setting up clock synchronization:
-
-- We recommend using [Google Public NTP](https://developers.google.com/time/) or [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html) with the clock sync service you are already using (e.g., [`ntpd`](http://doc.ntp.org/), [`chrony`](https://chrony.tuxfamily.org/index.html)). For example, if you are already using `ntpd`, configure `ntpd` to point to the Google or Amazon time server.
-
- {{site.data.alerts.callout_info}}
- Amazon Time Sync Service is only available within [Amazon EC2](https://aws.amazon.com/ec2/), so hybrid environments should use Google Public NTP instead.
- {{site.data.alerts.end}}
-
-- If you do not want to use the Google or Amazon time sources, you can use `chrony` and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
-- Do not mix time sources. It is important to pick one (e.g., Google Public NTP, Amazon Time Sync Service) and use the same for all nodes in the cluster.
-- Do not run more than one clock sync service on VMs where `cockroach` is running.
-
-### Tutorials
-
-For guidance on synchronizing clocks, see the tutorial for your deployment environment:
-
-Environment | Featured Approach
-------------|---------------------
-[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
-[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
-[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
-[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
-[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.
diff --git a/src/current/_includes/v2.1/faq/clock-synchronization-monitoring.html b/src/current/_includes/v2.1/faq/clock-synchronization-monitoring.html
deleted file mode 100644
index 7fb82e4d188..00000000000
--- a/src/current/_includes/v2.1/faq/clock-synchronization-monitoring.html
+++ /dev/null
@@ -1,8 +0,0 @@
-As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus timeseries database. Two of these metrics export how close each node's clock is to the clock of all other nodes:
-
-Metric | Definition
--------|-----------
-`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds
-`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds
-
-As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset. For example, with the default maximum offset of 500ms, you would alert as the mean offset approaches 400ms.
diff --git a/src/current/_includes/v2.1/faq/differences-between-numberings.md b/src/current/_includes/v2.1/faq/differences-between-numberings.md
deleted file mode 100644
index 741ec4f8066..00000000000
--- a/src/current/_includes/v2.1/faq/differences-between-numberings.md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences |
-|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------|
-| Size | 16 bytes | 8 bytes | 1 to 8 bytes |
-| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered |
-| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention |
-| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values |
-| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local |
-| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher |
-| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node |
-| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited |
diff --git a/src/current/_includes/v2.1/faq/planned-maintenance.md b/src/current/_includes/v2.1/faq/planned-maintenance.md
deleted file mode 100644
index c9fbb49266a..00000000000
--- a/src/current/_includes/v2.1/faq/planned-maintenance.md
+++ /dev/null
@@ -1,22 +0,0 @@
-By default, if a node stays offline for more than 5 minutes, the cluster will consider it dead and will rebalance its data to other nodes. Before temporarily stopping nodes for planned maintenance (e.g., upgrading system software), if you expect any nodes to be offline for longer than 5 minutes, you can prevent the cluster from unnecessarily rebalancing data off the nodes by increasing the `server.time_until_store_dead` [cluster setting](cluster-settings.html) to match the estimated maintenance window.
-
-For example, let's say you want to maintain a group of servers, and the nodes running on the servers may be offline for up to 15 minutes as a result. Before shutting down the nodes, you would change the `server.time_until_store_dead` cluster setting as follows:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.time_until_store_dead = '15m0s';
-~~~
-
-After completing the maintenance work and [restarting the nodes](start-a-node.html), you would then change the setting back to its default:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.time_until_store_dead = '5m0s';
-~~~
-
-It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example:
-
-{% include copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING server.shutdown.drain_wait = '10s';
- ~~~
diff --git a/src/current/_includes/v2.1/faq/sequential-numbers.md b/src/current/_includes/v2.1/faq/sequential-numbers.md
deleted file mode 100644
index ee5bd96d9c4..00000000000
--- a/src/current/_includes/v2.1/faq/sequential-numbers.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations:
-
-- Unless you need roughly-ordered numbers, we recommend using [`UUID`](uuid.html) values instead. See the [previous
-FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details.
-- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that
-consumes a lower sequence number commits after a transaction that consumes a higher number).
-- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers.
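-
-If you do need a sequence despite these considerations, the basic pattern looks like the following sketch (names are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE SEQUENCE customer_seq;
-> CREATE TABLE customer (id INT PRIMARY KEY DEFAULT nextval('customer_seq'), name STRING);
-~~~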
diff --git a/src/current/_includes/v2.1/faq/sequential-transactions.md b/src/current/_includes/v2.1/faq/sequential-transactions.md
deleted file mode 100644
index 684f2ce5d2a..00000000000
--- a/src/current/_includes/v2.1/faq/sequential-transactions.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly
-solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM
-TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following:
-
-- Paginating through all the changes to a table or dataset
-- Determining the order of changes to data over time
-- Determining the state of data at some point in the past
-- Determining the changes to data between two points of time
-
-Consider also that the values generated by `unique_rowid()`, described in the previous FAQ entries, provide an approximate time ordering.
-
-However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows:
-
-- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);`
-- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;`
-
-This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result.
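-
-Putting these statements together as a runnable sketch:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE cnt(val INT PRIMARY KEY);
-> INSERT INTO cnt(val) VALUES(1);
-> INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;
-~~~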
-
-If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs.
diff --git a/src/current/_includes/v2.1/faq/simulate-key-value-store.html b/src/current/_includes/v2.1/faq/simulate-key-value-store.html
deleted file mode 100644
index 4772fa5358c..00000000000
--- a/src/current/_includes/v2.1/faq/simulate-key-value-store.html
+++ /dev/null
@@ -1,13 +0,0 @@
-CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key:
-
-~~~ sql
-> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES);
-~~~
-
-When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation:
-
-~~~ sql
-> UPSERT INTO kv VALUES (1, b'hello')
-~~~
-
-This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises.
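-
-For example, if the need for another column arises later, one can be added with an ordinary schema change (assuming the `kv` table above; the column name is arbitrary):
-
-~~~ sql
-> ALTER TABLE kv ADD COLUMN expires TIMESTAMP;
-~~~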
diff --git a/src/current/_includes/v2.1/faq/sql-query-logging.md b/src/current/_includes/v2.1/faq/sql-query-logging.md
deleted file mode 100644
index b84937f9300..00000000000
--- a/src/current/_includes/v2.1/faq/sql-query-logging.md
+++ /dev/null
@@ -1,63 +0,0 @@
-There are several ways to log SQL queries. The type of logging you use will depend on your requirements.
-
-- For per-table audit logs, turn on [SQL audit logs](#sql-audit-logs).
-- For system troubleshooting and performance optimization, turn on [cluster-wide execution logs](#cluster-wide-execution-logs).
-- For local testing, turn on [per-node execution logs](#per-node-execution-logs).
-
-### SQL audit logs
-
-{% include {{ page.version.version }}/misc/experimental-warning.md %}
-
-SQL audit logging is useful if you want to log all queries that are run against specific tables.
-
-- For a tutorial, see [SQL Audit Logging](sql-audit-logging.html).
-
-- For SQL reference documentation, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html).
-
-### Cluster-wide execution logs
-
-For production clusters, the best way to log all queries is to turn on the [cluster-wide setting](cluster-settings.html) `sql.trace.log_statement_execute`:
-
-~~~ sql
-> SET CLUSTER SETTING sql.trace.log_statement_execute = true;
-~~~
-
-With this setting on, each node of the cluster writes all SQL queries it executes to a separate log file `cockroach-sql-exec.log`. When you no longer need to log queries, you can turn the setting back off:
-
-~~~ sql
-> SET CLUSTER SETTING sql.trace.log_statement_execute = false;
-~~~
-
-### Per-node execution logs
-
-Alternatively, if you are testing CockroachDB locally and want to log queries executed just by a specific node, you can either pass a CLI flag at node startup, or execute a SQL function on a running node.
-
-Using the CLI to start a new node, pass the `--vmodule` flag to the [`cockroach start`](start-a-node.html) command. For example, to start a single node locally and log all SQL queries it executes, you'd run:
-
-~~~ shell
-$ cockroach start --insecure --listen-addr=localhost --vmodule=exec_log=2
-~~~
-
-From the SQL prompt on a running node, execute the `crdb_internal.set_vmodule()` [function](functions-and-operators.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT crdb_internal.set_vmodule('exec_log=2');
-~~~
-
-This will result in the following output:
-
-~~~
-+---------------------------+
-| crdb_internal.set_vmodule |
-+---------------------------+
-| 0 |
-+---------------------------+
-(1 row)
-~~~
-
-Once the logging is enabled, all of the node's queries will be written to the [CockroachDB log file](debug-and-error-logs.html) as follows:
-
-~~~
-I180402 19:12:28.112957 394661 sql/exec_log.go:173 [n1,client=127.0.0.1:50155,user=root] exec "psql" {} "SELECT version()" {} 0.795 1 ""
-~~~
diff --git a/src/current/_includes/v2.1/faq/when-to-interleave-tables.html b/src/current/_includes/v2.1/faq/when-to-interleave-tables.html
deleted file mode 100644
index a65196ad693..00000000000
--- a/src/current/_includes/v2.1/faq/when-to-interleave-tables.html
+++ /dev/null
@@ -1,5 +0,0 @@
-You're most likely to benefit from interleaved tables when:
-
- - Your tables form a [hierarchy](interleave-in-parent.html#interleaved-hierarchy)
- - Queries maximize the [benefits of interleaving](interleave-in-parent.html#benefits)
- - Queries do not suffer too greatly from interleaving's [tradeoffs](interleave-in-parent.html#tradeoffs)
diff --git a/src/current/_includes/v2.1/json/json-sample.go b/src/current/_includes/v2.1/json/json-sample.go
deleted file mode 100644
index ecba73acc55..00000000000
--- a/src/current/_includes/v2.1/json/json-sample.go
+++ /dev/null
@@ -1,79 +0,0 @@
-package main
-
-import (
- "database/sql"
- "fmt"
- "io/ioutil"
- "net/http"
- "time"
-
- _ "github.com/lib/pq"
-)
-
-func main() {
- db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257")
- if err != nil {
- panic(err)
- }
-
- // The Reddit API wants us to tell it where to start from. The first request
- // we just say "null" to say "from the start", subsequent requests will use
- // the value received from the last call.
- after := "null"
-
- for i := 0; i < 300; i++ {
- after, err = makeReq(db, after)
- if err != nil {
- panic(err)
- }
- // Reddit limits to 30 requests per minute, so do not do any more than that.
- time.Sleep(2 * time.Second)
- }
-}
-
-func makeReq(db *sql.DB, after string) (string, error) {
- // First, make a request to reddit using the appropriate "after" string.
- client := &http.Client{}
- req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil)
- if err != nil {
- return "", err
- }
-
- req.Header.Add("User-Agent", `Go`)
-
- resp, err := client.Do(req)
- if err != nil {
- return "", err
- }
-
- res, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- return "", err
- }
-
- // We've gotten back our JSON from reddit, we can use a couple SQL tricks to
- // accomplish multiple things at once.
- // The JSON reddit returns looks like this:
- // {
- // "data": {
- // "children": [ ... ]
- // },
- // "after": ...
- // }
- // We structure our query so that we extract the `children` field, and then
- // expand that and insert each individual element into the database as a
- // separate row. We then return the "after" field so we know how to make the
- // next request.
- r, err := db.Query(`
- INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements($1->'data'->'children')
- RETURNING $1->'data'->'after'`,
- string(res))
- if err != nil {
- return "", err
- }
-
- // Since we did a RETURNING, we need to grab the result of our query.
- r.Next()
- var newAfter string
- r.Scan(&newAfter)
-
- return newAfter, nil
-}
diff --git a/src/current/_includes/v2.1/json/json-sample.py b/src/current/_includes/v2.1/json/json-sample.py
deleted file mode 100644
index 68b7fd1ef37..00000000000
--- a/src/current/_includes/v2.1/json/json-sample.py
+++ /dev/null
@@ -1,44 +0,0 @@
-import json
-import psycopg2
-import requests
-import time
-
-conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-# The Reddit API wants us to tell it where to start from. The first request
-# we just say "null" to say "from the start"; subsequent requests will use
-# the value received from the last call.
-url = "https://www.reddit.com/r/programming.json"
-after = {"after": "null"}
-
-for n in range(300):
- # First, make a request to reddit using the appropriate "after" string.
- req = requests.get(url, params=after, headers={"User-Agent": "Python"})
-
- # Decode the JSON and set "after" for the next request.
- resp = req.json()
- after = {"after": str(resp['data']['after'])}
-
- # Convert the JSON to a string to send to the database.
- data = json.dumps(resp)
-
- # The JSON reddit returns looks like this:
-    # {
-    #    "data": {
-    #        "children": [ ... ],
-    #        "after": ...
-    #    }
-    # }
- # We structure our query so that we extract the `children` field, and then
- # expand that and insert each individual element into the database as a
- # separate row.
- cur.execute("""INSERT INTO jsonb_test.programming (posts)
- SELECT json_array_elements(%s->'data'->'children')""", (data,))
-
- # Reddit limits to 30 requests per minute, so do not do any more than that.
- time.sleep(2)
-
-cur.close()
-conn.close()
diff --git a/src/current/_includes/v2.1/known-limitations/adding-stores-to-node.md b/src/current/_includes/v2.1/known-limitations/adding-stores-to-node.md
deleted file mode 100644
index ee4844e2433..00000000000
--- a/src/current/_includes/v2.1/known-limitations/adding-stores-to-node.md
+++ /dev/null
@@ -1,5 +0,0 @@
-After a node has initially joined a cluster, it is not possible to add additional [stores](start-a-node.html#store) to the node. Stopping the node and restarting it with additional stores causes the node to not reconnect to the cluster.
-
-To work around this limitation, [decommission the node](remove-nodes.html), remove its data directory, and then run [`cockroach start`](start-a-node.html) to join the cluster again as a new node.
-
-[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/39415)
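-
-A minimal sketch of the workaround (hypothetical node ID, address, and paths; adjust for your deployment):
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Decommission the node (assumes node ID 4 and an insecure local cluster):
-$ cockroach node decommission 4 --insecure --host=localhost:26257
-
-# Remove the node's data directory:
-$ rm -rf /mnt/node4-data
-
-# Rejoin the cluster as a new node, now with the additional store:
-$ cockroach start --insecure --join=localhost:26257 --store=/mnt/node4-data --store=/mnt/node4-extra
-~~~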
diff --git a/src/current/_includes/v2.1/known-limitations/cdc.md b/src/current/_includes/v2.1/known-limitations/cdc.md
deleted file mode 100644
index 8234797cb14..00000000000
--- a/src/current/_includes/v2.1/known-limitations/cdc.md
+++ /dev/null
@@ -1,11 +0,0 @@
-The following are limitations in the v2.1 release and will be addressed in the future:
-
-- The CockroachDB core changefeed is not ready for external testing.
-- Changefeeds only work on tables with a single [column family](column-families.html) (which is the default for new tables).
-- Many DDL queries (including [`TRUNCATE`](truncate.html) and [`DROP TABLE`](drop-table.html)) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended).
-- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html).
-- Changefeed backoff/retry behavior during partial or intermittent sink unavailability has not been optimized; however, [ordering guarantees](change-data-capture.html#ordering-guarantees) will still hold for as long as a changefeed [remains active](change-data-capture.html#monitor-a-changefeed).
-- Changefeeds use a pull model, but will use a push model in the future, lowering latencies considerably.
-- Changefeeds cannot be altered. To alter, cancel the changefeed and [create a new one with updated settings from where it left off](create-changefeed.html#start-a-new-changefeed-where-another-ended), as sketched after this list.
-- Additional envelope options will be added, including one that displays the old and new values for the changed row.
-- Additional target options will be added, including partitions and ranges of primary key rows.
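-
-A sketch of resuming via a new changefeed (hypothetical table, sink, and timestamp; the `cursor` value is the high-water timestamp of the cancelled changefeed):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE CHANGEFEED FOR TABLE foo
-  INTO 'kafka://broker:9092'
-  WITH updated, cursor = '1540001234567890123.0000000000';
-~~~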
diff --git a/src/current/_includes/v2.1/known-limitations/cte-by-name.md b/src/current/_includes/v2.1/known-limitations/cte-by-name.md
deleted file mode 100644
index d33a6f8c7e8..00000000000
--- a/src/current/_includes/v2.1/known-limitations/cte-by-name.md
+++ /dev/null
@@ -1,10 +0,0 @@
-It is currently not possible to refer to a [common table expression](common-table-expressions.html) by name more than once.
-
-For example, the following query is invalid because the CTE `a` is
-referred to twice:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (VALUES (1), (2), (3))
- SELECT * FROM a, a;
-~~~
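-
-One possible workaround (not from the original docs) is to define the expression once per reference:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH a AS (VALUES (1), (2), (3)),
-       b AS (VALUES (1), (2), (3))
-  SELECT * FROM a, b;
-~~~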
diff --git a/src/current/_includes/v2.1/known-limitations/dump-cyclic-foreign-keys.md b/src/current/_includes/v2.1/known-limitations/dump-cyclic-foreign-keys.md
deleted file mode 100644
index 4e3c43644ea..00000000000
--- a/src/current/_includes/v2.1/known-limitations/dump-cyclic-foreign-keys.md
+++ /dev/null
@@ -1 +0,0 @@
-The [`cockroach dump`](sql-dump.html) command will successfully create a dump file for a table with a [foreign key](foreign-key.html) reference to itself, or a set of tables with a cyclic foreign key dependency (e.g., a depends on b depends on a). That dump file, however, can only be executed after manually editing the output to remove the foreign key definitions from the `CREATE TABLE` statements and adding them as `ALTER TABLE ... ADD CONSTRAINT` statements after the `INSERT` statements.
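-
-A sketch of the required edit, using hypothetical tables `a` and `b` that reference each other:
-
-{% include copy-clipboard.html %}
-~~~ sql
--- CREATE TABLE statements with the foreign key definitions removed:
-CREATE TABLE a (id INT PRIMARY KEY, b_id INT);
-CREATE TABLE b (id INT PRIMARY KEY, a_id INT);
-
-INSERT INTO a VALUES (1, 1);
-INSERT INTO b VALUES (1, 1);
-
--- Foreign keys re-added after the INSERT statements:
-ALTER TABLE a ADD CONSTRAINT fk_b FOREIGN KEY (b_id) REFERENCES b (id);
-ALTER TABLE b ADD CONSTRAINT fk_a FOREIGN KEY (a_id) REFERENCES a (id);
-~~~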
diff --git a/src/current/_includes/v2.1/known-limitations/import-interleaved-table.md b/src/current/_includes/v2.1/known-limitations/import-interleaved-table.md
deleted file mode 100644
index 2198a72933a..00000000000
--- a/src/current/_includes/v2.1/known-limitations/import-interleaved-table.md
+++ /dev/null
@@ -1 +0,0 @@
-After using [`cockroach dump`](sql-dump.html) to dump the schema and data of an interleaved table, the output must be edited before it can be imported via [`IMPORT`](import.html). See [#35462](https://github.com/cockroachdb/cockroach/issues/35462) for the workaround and more details.
diff --git a/src/current/_includes/v2.1/known-limitations/node-map.md b/src/current/_includes/v2.1/known-limitations/node-map.md
deleted file mode 100644
index 863f09c3ac2..00000000000
--- a/src/current/_includes/v2.1/known-limitations/node-map.md
+++ /dev/null
@@ -1,8 +0,0 @@
-You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration:
-
-| Node | Region | Datacenter |
-| ------ | ------ | ------ |
-| Node1 | us-east | datacenter-1 |
-| Node2 | us-west | datacenter-1 |
-
-In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed.
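-
-For example, setting coordinates at the region level works (coordinates borrowed from the AWS locations table elsewhere in these docs):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT into system.locations VALUES ('region', 'us-east', 37.478397, -76.453077);
-> INSERT into system.locations VALUES ('region', 'us-west', 43.804133, -120.554201);
-~~~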
diff --git a/src/current/_includes/v2.1/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v2.1/known-limitations/partitioning-with-placeholders.md
deleted file mode 100644
index b3c3345200d..00000000000
--- a/src/current/_includes/v2.1/known-limitations/partitioning-with-placeholders.md
+++ /dev/null
@@ -1 +0,0 @@
-When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause.
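-
-For example, a `PARTITION BY LIST` clause with literal values works, but binding those values via a placeholder such as `$1` does not (hypothetical table):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (
-    region STRING,
-    id INT,
-    PRIMARY KEY (region, id)
-) PARTITION BY LIST (region) (
-    PARTITION us VALUES IN ('us-east', 'us-west'),
-    PARTITION eu VALUES IN ('eu-west')
-);
-~~~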
diff --git a/src/current/_includes/v2.1/known-limitations/system-range-replication.md b/src/current/_includes/v2.1/known-limitations/system-range-replication.md
deleted file mode 100644
index dd0433f7a18..00000000000
--- a/src/current/_includes/v2.1/known-limitations/system-range-replication.md
+++ /dev/null
@@ -1 +0,0 @@
-Changes to the [`.default` cluster-wide replication zone](configure-replication-zones.html#edit-the-default-replication-zone) are not automatically applied to existing replication zones, including pre-configured zones for important system ranges that must remain available for the cluster as a whole to remain available. The zones for these system ranges have an initial replication factor of 5 to make them more resilient to node failure. However, if you increase the `.default` zone's replication factor above 5, consider [increasing the replication factor for important system ranges](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) as well.
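-
-A sketch, assuming the `.default` zone has been raised to 7 replicas and the important system ranges should match:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 7;
-> ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 7;
-> ALTER RANGE liveness CONFIGURE ZONE USING num_replicas = 7;
-~~~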
diff --git a/src/current/_includes/v2.1/metric-names.md b/src/current/_includes/v2.1/metric-names.md
deleted file mode 100644
index 7eebed323d8..00000000000
--- a/src/current/_includes/v2.1/metric-names.md
+++ /dev/null
@@ -1,246 +0,0 @@
-Name | Help
------|-----
-`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas)
-`addsstable.copies` | Number of SSTable ingestions that required copying files during application
-`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders)
-`build.timestamp` | Build information
-`capacity.available` | Available storage capacity
-`capacity.reserved` | Capacity reserved for snapshots
-`capacity.used` | Used storage capacity
-`capacity` | Total storage capacity
-`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds
-`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds
-`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges
-`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine
-`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine
-`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions
-`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue
-`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted
-`distsender.batches.partial` | Number of partial batches processed
-`distsender.batches` | Number of batches processed
-`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered
-`distsender.rpc.sent.local` | Number of local RPCs sent
-`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors
-`distsender.rpc.sent` | Number of RPCs sent
-`exec.error` | Number of batch KV requests that failed to execute on this node
-`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node
-`exec.success` | Number of batch KV requests executed successfully on this node
-`gcbytesage` | Cumulative age of non-live data in seconds
-`gossip.bytes.received` | Number of received gossip bytes
-`gossip.bytes.sent` | Number of sent gossip bytes
-`gossip.connections.incoming` | Number of active incoming gossip connections
-`gossip.connections.outgoing` | Number of active outgoing gossip connections
-`gossip.connections.refused` | Number of refused incoming gossip connections
-`gossip.infos.received` | Number of received gossip Info objects
-`gossip.infos.sent` | Number of sent gossip Info objects
-`intentage` | Cumulative age of intents in seconds
-`intentbytes` | Number of bytes in intent KV pairs
-`intentcount` | Count of intent keys
-`keybytes` | Number of bytes taken up by keys
-`keycount` | Count of all keys
-`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated
-`leases.epoch` | Number of replica leaseholders using epoch-based leases
-`leases.error` | Number of failed lease requests
-`leases.expiration` | Number of replica leaseholders using expiration-based leases
-`leases.success` | Number of successful lease requests
-`leases.transfers.error` | Number of failed lease transfers
-`leases.transfers.success` | Number of successful lease transfers
-`livebytes` | Number of bytes of live data (keys plus values)
-`livecount` | Count of live keys
-`liveness.epochincrements` | Number of times this node has incremented its liveness epoch
-`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node
-`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds
-`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node
-`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live)
-`node-id` | node ID with labels for advertised RPC and HTTP addresses
-`queue.consistency.pending` | Number of pending replicas in the consistency checker queue
-`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue
-`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue
-`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue
-`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal
-`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal
-`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine
-`queue.gc.info.intentsconsidered` | Number of 'old' intents
-`queue.gc.info.intenttxns` | Number of associated distinct transactions
-`queue.gc.info.numkeysaffected` | Number of keys with GC'able data
-`queue.gc.info.pushtxn` | Number of attempted pushes
-`queue.gc.info.resolvesuccess` | Number of successful intent resolutions
-`queue.gc.info.resolvetotal` | Number of attempted intent resolutions
-`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns
-`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns
-`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns
-`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine
-`queue.gc.pending` | Number of pending replicas in the GC queue
-`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue
-`queue.gc.process.success` | Number of replicas successfully processed by the GC queue
-`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue
-`queue.raftlog.pending` | Number of pending replicas in the Raft log queue
-`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue
-`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue
-`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue
-`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue
-`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue
-`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue
-`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue
-`queue.replicagc.pending` | Number of pending replicas in the replica GC queue
-`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue
-`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue
-`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue
-`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue
-`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue
-`queue.replicate.pending` | Number of pending replicas in the replicate queue
-`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue
-`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue
-`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue
-`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options
-`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue
-`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage)
-`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition)
-`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue
-`queue.split.pending` | Number of pending replicas in the split queue
-`queue.split.process.failure` | Number of replicas which failed processing in the split queue
-`queue.split.process.success` | Number of replicas successfully processed by the split queue
-`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue
-`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue
-`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue
-`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue
-`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue
-`raft.commandsapplied` | Count of Raft commands applied
-`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue
-`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced
-`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands
-`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries
-`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick()
-`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working
-`raft.rcvd.app` | Number of MsgApp messages received by this store
-`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store
-`raft.rcvd.dropped` | Number of dropped incoming Raft messages
-`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store
-`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store
-`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store
-`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store
-`raft.rcvd.prop` | Number of MsgProp messages received by this store
-`raft.rcvd.snap` | Number of MsgSnap messages received by this store
-`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store
-`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store
-`raft.rcvd.vote` | Number of MsgVote messages received by this store
-`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store
-`raft.ticks` | Number of Raft ticks queued
-`raftlog.behind` | Number of Raft log entries followers on other stores are behind
-`raftlog.truncated` | Number of Raft log entries truncated
-`range.adds` | Number of range additions
-`range.raftleadertransfers` | Number of raft leader transfers
-`range.removes` | Number of range removals
-`range.snapshots.generated` | Number of generated snapshots
-`range.snapshots.normal-applied` | Number of applied snapshots
-`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots
-`range.splits` | Number of range splits
-`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum
-`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target
-`ranges` | Number of ranges
-`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions
-`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined
-`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined
-`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined
-`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue
-`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue
-`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue
-`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree
-`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue
-`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store
-`replicas.leaders` | Number of raft leaders
-`replicas.leaseholders` | Number of lease holders
-`replicas.quiescent` | Number of quiesced replicas
-`replicas.reserved` | Number of replicas reserved for snapshots
-`replicas` | Number of replicas
-`requests.backpressure.split` | Number of backpressured writes waiting on a Range split
-`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue
-`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender
-`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease
-`requests.slow.raft` | Number of requests that have been stuck for a long time in raft
-`rocksdb.block.cache.hits` | Count of block cache hits
-`rocksdb.block.cache.misses` | Count of block cache misses
-`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache
-`rocksdb.block.cache.usage` | Bytes used by the block cache
-`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked
-`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation
-`rocksdb.compactions` | Number of table compactions
-`rocksdb.flushes` | Number of table flushes
-`rocksdb.memtable.total-size` | Current size of memtable in bytes
-`rocksdb.num-sstables` | Number of rocksdb SSTables
-`rocksdb.read-amplification` | Number of disk reads per query
-`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks
-`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds
-`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error.
-`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error.
-`sql.bytesin` | Number of sql bytes received
-`sql.bytesout` | Number of sql bytes sent
-`sql.conns` | Number of active sql connections
-`sql.ddl.count` | Number of SQL DDL statements
-`sql.delete.count` | Number of SQL DELETE statements
-`sql.distsql.exec.latency` | Latency in nanoseconds of DistSQL statement execution
-`sql.distsql.flows.active` | Number of distributed SQL flows currently active
-`sql.distsql.flows.total` | Number of distributed SQL flows executed
-`sql.distsql.queries.active` | Number of distributed SQL queries currently active
-`sql.distsql.queries.total` | Number of distributed SQL queries executed
-`sql.distsql.select.count` | Number of DistSQL SELECT statements
-`sql.distsql.service.latency` | Latency in nanoseconds of DistSQL request execution
-`sql.exec.latency` | Latency in nanoseconds of SQL statement execution
-`sql.insert.count` | Number of SQL INSERT statements
-`sql.mem.current` | Current sql statement memory usage
-`sql.mem.distsql.current` | Current sql statement memory usage for distsql
-`sql.mem.distsql.max` | Memory usage per sql statement for distsql
-`sql.mem.max` | Memory usage per sql statement
-`sql.mem.session.current` | Current sql session memory usage
-`sql.mem.session.max` | Memory usage per sql session
-`sql.mem.txn.current` | Current sql transaction memory usage
-`sql.mem.txn.max` | Memory usage per sql transaction
-`sql.misc.count` | Number of other SQL statements
-`sql.query.count` | Number of SQL queries
-`sql.select.count` | Number of SQL SELECT statements
-`sql.service.latency` | Latency in nanoseconds of SQL request execution
-`sql.txn.abort.count` | Number of SQL transaction ABORT statements
-`sql.txn.begin.count` | Number of SQL transaction BEGIN statements
-`sql.txn.commit.count` | Number of SQL transaction COMMIT statements
-`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements
-`sql.update.count` | Number of SQL UPDATE statements
-`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo
-`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released
-`sys.cgocalls` | Total number of cgo calls
-`sys.cpu.sys.ns` | Total system cpu time in nanoseconds
-`sys.cpu.sys.percent` | Current system cpu percentage
-`sys.cpu.user.ns` | Total user cpu time in nanoseconds
-`sys.cpu.user.percent` | Current user cpu percentage
-`sys.fd.open` | Process open file descriptors
-`sys.fd.softlimit` | Process open FD soft limit
-`sys.gc.count` | Total number of GC runs
-`sys.gc.pause.ns` | Total GC pause in nanoseconds
-`sys.gc.pause.percent` | Current GC pause percentage
-`sys.go.allocbytes` | Current bytes of memory allocated by go
-`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released
-`sys.goroutines` | Current number of goroutines
-`sys.rss` | Current process RSS
-`sys.uptime` | Process uptime in seconds
-`sysbytes` | Number of bytes in system KV pairs
-`syscount` | Count of system KV pairs
-`timeseries.write.bytes` | Total size in bytes of metric samples written to disk
-`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk
-`timeseries.write.samples` | Total number of metric samples written to disk
-`totalbytes` | Total number of bytes taken up by keys and values including non-live data
-`tscache.skl.read.pages` | Number of pages in the read timestamp cache
-`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache
-`tscache.skl.write.pages` | Number of pages in the write timestamp cache
-`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache
-`txn.abandons` | Number of abandoned KV transactions
-`txn.aborts` | Number of aborted KV transactions
-`txn.autoretries` | Number of automatic retries to avoid serializable restarts
-`txn.commits1PC` | Number of committed one-phase KV transactions
-`txn.commits` | Number of committed KV transactions (including 1PC)
-`txn.durations` | KV transaction durations in nanoseconds
-`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command
-`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer
-`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE
-`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first
-`txn.restarts` | Number of restarted KV transactions
-`valbytes` | Number of bytes taken up by values
-`valcount` | Count of all values
diff --git a/src/current/_includes/v2.1/misc/available-capacity-metric.md b/src/current/_includes/v2.1/misc/available-capacity-metric.md
deleted file mode 100644
index 11511de2d37..00000000000
--- a/src/current/_includes/v2.1/misc/available-capacity-metric.md
+++ /dev/null
@@ -1 +0,0 @@
-If you are running multiple nodes on a single machine (not recommended in production) and didn't specify the maximum allocated storage capacity for each node using the [`--store`](start-a-node.html#store) flag, the capacity metrics in the Admin UI are incorrect. This is because when multiple nodes are running on a single machine, the machine's hard disk is treated as an available store for each node, while in reality, only one hard disk is available for all nodes. The total available capacity is then calculated as the hard disk size multiplied by the number of nodes on the machine.
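-
-For example, capping each node's store size when starting multiple nodes on one machine (hypothetical paths and sizes):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure --store=path=/mnt/node1,size=100GB --port=26257 --http-port=8080
-$ cockroach start --insecure --store=path=/mnt/node2,size=100GB --port=26258 --http-port=8081 --join=localhost:26257
-~~~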
diff --git a/src/current/_includes/v2.1/misc/aws-locations.md b/src/current/_includes/v2.1/misc/aws-locations.md
deleted file mode 100644
index 8b073c1f230..00000000000
--- a/src/current/_includes/v2.1/misc/aws-locations.md
+++ /dev/null
@@ -1,18 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`|
-| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -76.453077)` |
-| US West (N. California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` |
-| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` |
-| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` |
-| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` |
-| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` |
-| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` |
-| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` |
-| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` |
-| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` |
-| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` |
-| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` |
-| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` |
-| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` |
-| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v2.1/misc/azure-locations.md b/src/current/_includes/v2.1/misc/azure-locations.md
deleted file mode 100644
index 7119ff8b7cb..00000000000
--- a/src/current/_includes/v2.1/misc/azure-locations.md
+++ /dev/null
@@ -1,30 +0,0 @@
-| Location | SQL Statement |
-| -------- | ------------- |
-| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` |
-| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` |
-| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` |
-| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` |
-| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` |
-| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` |
-| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` |
-| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` |
-| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` |
-| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` |
-| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` |
-| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` |
-| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` |
-| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` |
-| australiasoutheast (Australia Southeast) | `INSERT into system.locations VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` |
-| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` |
-| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` |
-| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` |
-| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` |
-| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` |
-| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` |
-| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` |
-| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` |
-| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` |
-| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` |
-| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` |
-| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` |
-| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` |
diff --git a/src/current/_includes/v2.1/misc/basic-terms.md b/src/current/_includes/v2.1/misc/basic-terms.md
deleted file mode 100644
index 8eebde3db17..00000000000
--- a/src/current/_includes/v2.1/misc/basic-terms.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Term | Definition
------|------------
-**Cluster** | Your CockroachDB deployment, which acts as a single logical application.
-**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster.
-**Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.<br><br>From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as that range reaches 64 MiB in size, it splits into two ranges. This process continues for these new ranges as the table and its indexes continue growing.
-**Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
-**Leaseholder** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.<br><br>Unlike writes, read requests access the leaseholder and send the results to the client without needing to coordinate with any of the other range replicas. This reduces the network round trips involved and is possible because the leaseholder is guaranteed to be up-to-date due to the fact that all write requests also go to the leaseholder.
-**Raft Leader** | For each range, one of the replicas is the "leader" for write requests. Via the [Raft consensus protocol](replication-layer.html#raft), this replica ensures that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder.
-**Raft Log** | For each range, a time-ordered log of writes to the range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication.
diff --git a/src/current/_includes/v2.1/misc/beta-warning.md b/src/current/_includes/v2.1/misc/beta-warning.md
deleted file mode 100644
index 107fc2bfa4b..00000000000
--- a/src/current/_includes/v2.1/misc/beta-warning.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-**This is a beta feature.** It is currently undergoing continued testing. Please [file a GitHub issue](file-an-issue.html) with us if you identify a bug.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/misc/diagnostics-callout.html b/src/current/_includes/v2.1/misc/diagnostics-callout.html
deleted file mode 100644
index a969a8cf152..00000000000
--- a/src/current/_includes/v2.1/misc/diagnostics-callout.html
+++ /dev/null
@@ -1 +0,0 @@
-{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt-out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/misc/experimental-warning.md b/src/current/_includes/v2.1/misc/experimental-warning.md
deleted file mode 100644
index c6f3283bc8a..00000000000
--- a/src/current/_includes/v2.1/misc/experimental-warning.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-This is an experimental feature. The interface and output are subject to change.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/misc/explore-benefits-see-also.md b/src/current/_includes/v2.1/misc/explore-benefits-see-also.md
deleted file mode 100644
index 0392ed9bb83..00000000000
--- a/src/current/_includes/v2.1/misc/explore-benefits-see-also.md
+++ /dev/null
@@ -1,8 +0,0 @@
-- [Data Replication](demo-data-replication.html)
-- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html)
-- [Automatic Rebalancing](demo-automatic-rebalancing.html)
-- [Serializable Transactions](demo-serializable.html)
-- [Cross-Cloud Migration](demo-automatic-cloud-migration.html)
-- [Follow-the-Workload](demo-follow-the-workload.html)
-- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html)
-- [JSON Support](demo-json-support.html)
diff --git a/src/current/_includes/v2.1/misc/external-urls.md b/src/current/_includes/v2.1/misc/external-urls.md
deleted file mode 100644
index b18f5c369a2..00000000000
--- a/src/current/_includes/v2.1/misc/external-urls.md
+++ /dev/null
@@ -1,38 +0,0 @@
-~~~
-[scheme]://[host]/[path]?[parameters]
-~~~
-
-| Location | Scheme | Host | Parameters |
-|-------------------------------------------------------------+-------------+--------------------------------------------------+----------------------------------------------------------------------------|
-| Amazon S3 | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` |
-| Azure | `azure` | N/A (see [Example file URLs](#example-file-urls)) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME` |
-| Google Cloud [1](#considerations) | `gs` | Bucket name | `AUTH` (optional): can be `default` or `implicit` |
-| HTTP [2](#considerations) | `http` | Remote host | N/A |
-| NFS/Local [3](#considerations) | `nodelocal` | N/A (see [Example file URLs](#example-file-urls)) | N/A |
-| S3-compatible services [4](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [5](#considerations) (optional), `AWS_ENDPOINT` |
-
-{{site.data.alerts.callout_info}}
-The location parameters often contain special characters that need to be URI-encoded. Use Javascript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters.
-{{site.data.alerts.end}}
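-
-A minimal Go sketch (hypothetical bucket and credentials) showing the encoding step:
-
-{% include copy-clipboard.html %}
-~~~ go
-package main
-
-import (
-	"fmt"
-	"net/url"
-)
-
-func main() {
-	// Secret keys often contain characters such as '/' or '+' that must be
-	// URI-encoded before being embedded in the location URL.
-	secret := "abc/def+ghi=="
-	fmt.Printf("s3://acme-co/employees.sql?AWS_ACCESS_KEY_ID=%s&AWS_SECRET_ACCESS_KEY=%s\n",
-		"AKIAEXAMPLE", url.QueryEscape(secret))
-}
-~~~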
-
-
-
-- 1 If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` setting will be used if it is non-empty, otherwise the `implicit` behavior is used.
-
-- 2 You can create your own HTTP server with [Caddy or nginx](create-a-file-server.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs.
-
-- 3 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](start-a-node.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled.
-
-- 4 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service.
-
-- 5 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it.
-
-#### Example file URLs
-
-| Location | Example |
-|--------------+----------------------------------------------------------------------------------|
-| Amazon S3 | `s3://acme-co/employees.sql?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456` |
-| Azure | `azure://employees.sql?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co` |
-| Google Cloud | `gs://acme-co/employees.sql` |
-| HTTP | `http://localhost:8080/employees.sql` |
-| NFS/Local | `nodelocal:///path/employees` |
diff --git a/src/current/_includes/v2.1/misc/gce-locations.md b/src/current/_includes/v2.1/misc/gce-locations.md
deleted file mode 100644
index 22122aae78d..00000000000
--- a/src/current/_includes/v2.1/misc/gce-locations.md
+++ /dev/null
@@ -1,18 +0,0 @@
-| Location | SQL Statement |
-| ------ | ------ |
-| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` |
-| us-east4 (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` |
-| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` |
-| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` |
-| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` |
-| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` |
-| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` |
-| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` |
-| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` |
-| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` |
-| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` |
-| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` |
-| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` |
-| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` |
-| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` |
-| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` |
diff --git a/src/current/_includes/v2.1/misc/haproxy.md b/src/current/_includes/v2.1/misc/haproxy.md
deleted file mode 100644
index 6651e178ee4..00000000000
--- a/src/current/_includes/v2.1/misc/haproxy.md
+++ /dev/null
@@ -1,39 +0,0 @@
-By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly:
-
- ~~~
- global
- maxconn 4096
-
- defaults
- mode tcp
- # Timeout values should be configured for your specific use.
- # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
- timeout connect 10s
- timeout client 1m
- timeout server 1m
- # TCP keep-alive on client side. Server already enables them.
- option clitcpka
-
- listen psql
- bind :26257
- mode tcp
- balance roundrobin
- option httpchk GET /health?ready=1
- server cockroach1 :26257 check port 8080
- server cockroach2 :26257 check port 8080
- server cockroach3 :26257 check port 8080
- ~~~
-
- The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster:
-
- Field | Description
- ------|------------
- `timeout connect` `timeout client` `timeout server` | Timeout values that should be suitable for most deployments.
- `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.<br><br>This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node.
- `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms.
- `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests.
- `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](start-a-node.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.
-
- {{site.data.alerts.callout_info}}
- For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html).
- {{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/misc/install-next-steps.html b/src/current/_includes/v2.1/misc/install-next-steps.html
deleted file mode 100644
index 2111bdbed9c..00000000000
--- a/src/current/_includes/v2.1/misc/install-next-steps.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
-If you're just getting started with CockroachDB:
-
-The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.
diff --git a/src/current/_includes/v2.1/misc/logging-flags.md b/src/current/_includes/v2.1/misc/logging-flags.md
deleted file mode 100644
index 06af86228ee..00000000000
--- a/src/current/_includes/v2.1/misc/logging-flags.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Flag | Description
------|------------
-`--log-dir` | Enable logging to files and write logs to the specified directory.<br><br>Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory.
-`--log-dir-max-size` | After the log directory reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-dir-max-size=1GiB`.<br><br>**Default**: 100MiB
-`--log-file-max-size` | After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.<br><br>**Default**: 10MiB
-`--log-file-verbosity` | Only writes messages to log files if they are at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.<br><br>**Default**: `INFO`
-`--logtostderr` | Enable logging to `stderr` for messages at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--logtostderr=ERROR`.<br><br>If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.<br><br>Setting `--logtostderr=NONE` disables logging to `stderr`.
-`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.<br><br>When set to `false`, messages logged to `stderr` are colorized based on [severity level](debug-and-error-logs.html#severity-levels).<br><br>**Default:** `false`
-`--sql-audit-dir` | New in v2.0: If non-empty, create a SQL audit log in this directory. By default, SQL audit logs are written in the same directory as the other logs generated by CockroachDB. For more information, see [SQL Audit Logging](sql-audit-logging.html).
diff --git a/src/current/_includes/v2.1/misc/multi-store-nodes.md b/src/current/_includes/v2.1/misc/multi-store-nodes.md
deleted file mode 100644
index 01642597169..00000000000
--- a/src/current/_includes/v2.1/misc/multi-store-nodes.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_danger}}
-In the absence of [special replication constraints](configure-replication-zones.html), CockroachDB rebalances replicas to take advantage of available storage capacity. However, in a 3-node cluster with multiple stores per node, CockroachDB is **not** able to rebalance replicas from one store to another store on the same node because this would temporarily result in the node having multiple replicas of the same range, which is not allowed. This is due to the mechanics of rebalancing, where the cluster first creates a copy of the replica at the target destination before removing the source replica. To allow this type of cross-store rebalancing, the cluster must have 4 or more nodes; this allows the cluster to create a copy of the replica on a node that doesn't already have a replica of the range before removing the source replica and then migrating the new replica to the store with more capacity on the original node.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/misc/remove-user-callout.html b/src/current/_includes/v2.1/misc/remove-user-callout.html
deleted file mode 100644
index 925f83d779d..00000000000
--- a/src/current/_includes/v2.1/misc/remove-user-callout.html
+++ /dev/null
@@ -1 +0,0 @@
-Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user.
diff --git a/src/current/_includes/v2.1/misc/schema-change-stmt-note.md b/src/current/_includes/v2.1/misc/schema-change-stmt-note.md
deleted file mode 100644
index b522b658652..00000000000
--- a/src/current/_includes/v2.1/misc/schema-change-stmt-note.md
+++ /dev/null
@@ -1,3 +0,0 @@
-{{site.data.alerts.callout_info}}
-This statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/misc/schema-change-view-job.md b/src/current/_includes/v2.1/misc/schema-change-view-job.md
deleted file mode 100644
index 1e9b4a7444e..00000000000
--- a/src/current/_includes/v2.1/misc/schema-change-view-job.md
+++ /dev/null
@@ -1 +0,0 @@
-Whenever you initiate a schema change, CockroachDB registers it as a job, which you can view with [`SHOW JOBS`](show-jobs.html).
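-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW JOBS;
-~~~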
diff --git a/src/current/_includes/v2.1/misc/schema-changes-between-prepared-statements.md b/src/current/_includes/v2.1/misc/schema-changes-between-prepared-statements.md
deleted file mode 100644
index 736fe99df61..00000000000
--- a/src/current/_includes/v2.1/misc/schema-changes-between-prepared-statements.md
+++ /dev/null
@@ -1,33 +0,0 @@
-When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error:
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-CREATE TABLE users (id INT PRIMARY KEY);
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-PREPARE prep1 AS SELECT * FROM users;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-ALTER TABLE users ADD COLUMN name STRING;
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-INSERT INTO users VALUES (1, 'Max Roach');
-~~~
-
-{% include_cached copy-clipboard.html %}
-~~~ sql
-EXECUTE prep1;
-~~~
-
-~~~
-ERROR: cached plan must not change result type
-SQLSTATE: 0A000
-~~~
-
-It's therefore recommended to explicitly list result columns instead of using `SELECT *` in prepared statements, when possible.
diff --git a/src/current/_includes/v2.1/misc/schema-changes-within-transactions.md b/src/current/_includes/v2.1/misc/schema-changes-within-transactions.md
deleted file mode 100644
index 8a2061165cc..00000000000
--- a/src/current/_includes/v2.1/misc/schema-changes-within-transactions.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Within a single [transaction](transactions.html):
-
-- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions. For more details, [see examples of unsupported statements](online-schema-changes.html#examples-of-statements-that-fail).
-- A [`CREATE TABLE`](create-table.html) statement containing [`FOREIGN KEY`](foreign-key.html) or [`INTERLEAVE`](interleave-in-parent.html) clauses cannot be followed by statements that reference the new table.
-- A table cannot be dropped and then recreated with the same name. This is not possible within a single transaction because `DROP TABLE` does not immediately drop the name of the table. As a workaround, split the [`DROP TABLE`](drop-table.html) and [`CREATE TABLE`](create-table.html) statements into separate transactions, as sketched below.
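-
-A sketch of that workaround, using a hypothetical table `t`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> DROP TABLE t;
-> COMMIT;
-
-> BEGIN;
-> CREATE TABLE t (id INT PRIMARY KEY);
-> COMMIT;
-~~~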
diff --git a/src/current/_includes/v2.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v2.1/orchestration/kubernetes-limitations.md
deleted file mode 100644
index 00c6c0fdd21..00000000000
--- a/src/current/_includes/v2.1/orchestration/kubernetes-limitations.md
+++ /dev/null
@@ -1,7 +0,0 @@
-#### Kubernetes version
-
-Kubernetes 1.18 or higher is required in order to use our most up-to-date configuration files. Earlier Kubernetes releases do not support some of the options used in our configuration files. If you need to run on an older version of Kubernetes, we have kept around configuration files that work on older Kubernetes releases in the versioned subdirectories of [https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes) (e.g., [v1.7](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/v1.7)).
-
-#### Storage
-
-At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local).
diff --git a/src/current/_includes/v2.1/orchestration/kubernetes-prometheus-alertmanager.md b/src/current/_includes/v2.1/orchestration/kubernetes-prometheus-alertmanager.md
deleted file mode 100644
index 3f993d08637..00000000000
--- a/src/current/_includes/v2.1/orchestration/kubernetes-prometheus-alertmanager.md
+++ /dev/null
@@ -1,202 +0,0 @@
-Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-### Configure Prometheus
-
-Every node of a CockroachDB cluster exports granular timeseries metrics formatted for easy integration with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data. This section shows you how to orchestrate Prometheus as part of your Kubernetes cluster and pull these metrics into Prometheus for external monitoring.
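-
-If you want a quick look at the raw metrics a node exports, you can query its HTTP endpoint directly (a sketch assuming you have port-forwarded a node's HTTP port, 8080, to your local machine, as described later in this guide):
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Sample the first few lines of the Prometheus-formatted metrics.
-$ curl -s http://localhost:8080/_status/vars | head -n 5
-~~~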
-
-This guidance is based on [CoreOS's Prometheus Operator](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md), which allows a Prometheus instance to be managed using built-in Kubernetes concepts.
-
-{{site.data.alerts.callout_info}}
-If you're on Hosted GKE, before starting, make sure the email address associated with your Google Cloud account is part of the `cluster-admin` RBAC group, as shown in [Step 1. Start Kubernetes](#hosted-gke).
-{{site.data.alerts.end}}
-
-1. From your local workstation, edit the `cockroachdb` service to add the `prometheus: cockroachdb` label:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label svc cockroachdb prometheus=cockroachdb
- ~~~
-
- ~~~
- service "cockroachdb" labeled
- ~~~
-
-    This ensures that there is a Prometheus job and monitoring data only for the `cockroachdb` service, not for the `cockroachdb-public` service.
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label svc my-release-cockroachdb prometheus=cockroachdb
- ~~~
-
- ~~~
-    service "my-release-cockroachdb" labeled
- ~~~
-
-    This ensures that there is a Prometheus job and monitoring data only for the `my-release-cockroachdb` service, not for the `my-release-cockroachdb-public` service.
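-
-    To confirm the label was applied, you can list services by label selector (a quick check; the service name depends on your deployment method):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # List only the services carrying the prometheus=cockroachdb label.
-    $ kubectl get svc -l prometheus=cockroachdb
-    ~~~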
-
-
-2. Install [CoreOS's Prometheus Operator](https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml
- ~~~
-
- ~~~
- clusterrolebinding "prometheus-operator" created
- clusterrole "prometheus-operator" created
- serviceaccount "prometheus-operator" created
- deployment "prometheus-operator" created
- ~~~
-
-3. Confirm that the `prometheus-operator` has started:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get deploy prometheus-operator
- ~~~
-
- ~~~
- NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
- prometheus-operator 1 1 1 1 1m
- ~~~
-
-4. Use our [`prometheus.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/prometheus.yaml) file to create the various objects necessary to run a Prometheus instance:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml
- ~~~
-
- ~~~
- clusterrole "prometheus" created
- clusterrolebinding "prometheus" created
- servicemonitor "cockroachdb" created
- prometheus "cockroachdb" created
- ~~~
-
-5. Access the Prometheus UI locally and verify that CockroachDB is feeding data into Prometheus:
-
- 1. Port-forward from your local machine to the pod running Prometheus:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward prometheus-cockroachdb-0 9090
- ~~~
-
- 2. Go to http://localhost:9090 in your browser.
-
-    3. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. Each node's endpoint should be listed with the **UP** state.
-
-    4. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab to see the data plotted over time.
-
- {{site.data.alerts.callout_success}}
-    Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in [Access the Admin UI](#step-4-access-the-admin-ui) and then point your browser to http://localhost:8080/_status/vars.
-
- For more details on using the Prometheus UI, see their [official documentation](https://prometheus.io/docs/introduction/getting_started/).
- {{site.data.alerts.end}}
-
-### Configure Alertmanager
-
-Active monitoring helps you spot problems early, but it is also essential to send notifications when there are events that require investigation or intervention. This section shows you how to use [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml) to do this.
-
-1. Download our `alertmanager-config.yaml` configuration file, as shown below:
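-
-    A minimal sketch, assuming the file lives alongside the other Prometheus manifests used in this section:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Download the starter Alertmanager configuration.
-    $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager-config.yaml
-    ~~~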
-
-2. Edit the `alertmanager-config.yaml` file to [specify the desired receivers for notifications](https://prometheus.io/docs/alerting/configuration/). Initially, the file contains a placeholder web hook.
-
-3. Add this configuration to the Kubernetes cluster as a secret, renaming it to `alertmanager.yaml` and labelling it to make it easier to find:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create secret generic alertmanager-cockroachdb --from-file=alertmanager.yaml=alertmanager-config.yaml
- ~~~
-
- ~~~
- secret "alertmanager-cockroachdb" created
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl label secret alertmanager-cockroachdb app=cockroachdb
- ~~~
-
- ~~~
- secret "alertmanager-cockroachdb" labeled
- ~~~
-
- {{site.data.alerts.callout_danger}}
-    The name of the secret, `alertmanager-cockroachdb`, must match the name used in the `alertmanager.yaml` file. If they differ, the Alertmanager instance will start without configuration, and nothing will happen.
- {{site.data.alerts.end}}
-
-4. Use our [`alertmanager.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager.yaml) file to create the various objects necessary to run an Alertmanager instance, including a ClusterIP service so that Prometheus can forward alerts:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml
- ~~~
-
- ~~~
- alertmanager "cockroachdb" created
- service "alertmanager-cockroachdb" created
- ~~~
-
-5. Verify that Alertmanager is running:
-
- 1. Port-forward from your local machine to the pod running Alertmanager:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward alertmanager-cockroachdb-0 9093
- ~~~
-
-    2. Go to http://localhost:9093 in your browser to view the Alertmanager UI.
-
-6. Ensure that the Alertmanagers are visible to Prometheus by opening http://localhost:9090/status.
-
-7. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl apply -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml
- ~~~
-
- ~~~
- prometheusrule "prometheus-cockroachdb-rules" created
- ~~~
-
-8. Ensure that the rules are visible to Prometheus by opening http://localhost:9090/rules.
-
-9. Verify that the example alert is firing by opening http://localhost:9090/alerts.
-
-10. To remove the example alert:
-
- 1. Use the `kubectl edit` command to open the rules for editing:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl edit prometheusrules prometheus-cockroachdb-rules
- ~~~
-
- 2. Remove the `dummy.rules` block and save the file:
-
- ~~~
- - name: rules/dummy.rules
- rules:
- - alert: TestAlertManager
- expr: vector(1)
- ~~~
diff --git a/src/current/_includes/v2.1/orchestration/kubernetes-remove-nodes-insecure.md b/src/current/_includes/v2.1/orchestration/kubernetes-remove-nodes-insecure.md
deleted file mode 100644
index 06cce9aff79..00000000000
--- a/src/current/_includes/v2.1/orchestration/kubernetes-remove-nodes-insecure.md
+++ /dev/null
@@ -1,110 +0,0 @@
-To safely remove a node from your cluster, you must first decommission the node and only then adjust the `--replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html).
-{{site.data.alerts.end}}
-
-1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- node status --insecure --host=cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- node status --insecure --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](view-node-details.html) command to decommission it:
-
- {{site.data.alerts.callout_info}}
-    It's important to decommission the node with the highest number in its address because, when you reduce the `--replicas` count, Kubernetes will remove the pod for that node.
- {{site.data.alerts.end}}
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- node decommission --insecure --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- node decommission --insecure --host=my-release-cockroachdb-public
- ~~~
-
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 73 | true | false
- (1 row)
- ~~~
-
- Once the node has been fully decommissioned and stopped, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 0 | true | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-3. Once the node has been decommissioned, use the `kubectl scale` command to remove a pod from your StatefulSet:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset "cockroachdb" scaled
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset my-release-cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset "my-release-cockroachdb" scaled
- ~~~
-
diff --git a/src/current/_includes/v2.1/orchestration/kubernetes-remove-nodes-secure.md b/src/current/_includes/v2.1/orchestration/kubernetes-remove-nodes-secure.md
deleted file mode 100644
index adf42307280..00000000000
--- a/src/current/_includes/v2.1/orchestration/kubernetes-remove-nodes-secure.md
+++ /dev/null
@@ -1,107 +0,0 @@
-To safely remove a node from your cluster, you must first decommission the node and only then adjust the `--replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node.
-
-{{site.data.alerts.callout_danger}}
-If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html).
-{{site.data.alerts.end}}
-
-1. Get a shell into the `cockroachdb-client-secure` pod you created earlier and use the `cockroach node status` command to get the internal IDs of nodes:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- id | address | build | started_at | updated_at | is_available | is_live
- +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+
- 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true
- 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true
- 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true
- 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | v2.1.1 | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true
- (4 rows)
- ~~~
-
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](view-node-details.html) command to decommission it:
-
- {{site.data.alerts.callout_info}}
-    It's important to decommission the node with the highest number in its address because, when you reduce the `--replicas` count, Kubernetes will remove the pod for that node.
- {{site.data.alerts.end}}
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node decommission --certs-dir=/cockroach-certs --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ kubectl exec -it cockroachdb-client-secure -- ./cockroach node decommission --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
- ~~~
-
-
- You'll then see the decommissioning status print to `stderr` as it changes:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 73 | true | false
- (1 row)
- ~~~
-
- Once the node has been fully decommissioned and stopped, you'll see a confirmation:
-
- ~~~
- id | is_live | replicas | is_decommissioning | is_draining
- +---+---------+----------+--------------------+-------------+
- 4 | true | 0 | true | false
- (1 row)
-
- No more data reported on target nodes. Please verify cluster health before removing the nodes.
- ~~~
-
-3. Once the node has been decommissioned, use the `kubectl scale` command to remove a pod from your StatefulSet:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset "cockroachdb" scaled
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset my-release-cockroachdb --replicas=3
- ~~~
-
- ~~~
- statefulset "my-release-cockroachdb" scaled
- ~~~
-
diff --git a/src/current/_includes/v2.1/orchestration/kubernetes-scale-cluster.md b/src/current/_includes/v2.1/orchestration/kubernetes-scale-cluster.md
deleted file mode 100644
index 61df086548b..00000000000
--- a/src/current/_includes/v2.1/orchestration/kubernetes-scale-cluster.md
+++ /dev/null
@@ -1,31 +0,0 @@
-The Kubernetes cluster contains 4 nodes, one master and 3 workers. Pods get placed only on worker nodes, so to ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new worker node and then edit your StatefulSet configuration to add another pod.
-
-The Kubernetes cluster we created contains 3 nodes that pods can be run on. To ensure that you do not have two pods on the same node (as recommended in our [production best practices](recommended-production-settings.html)), you need to add a new node and then edit your StatefulSet configuration to add another pod.
-
-1. Add a worker node:
-    - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster) (see the sketch after this list).
- - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
- - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
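-
-    On GKE, for example, resizing can be a single command (a sketch, assuming your cluster is named `cockroachdb` as created earlier):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Grow the node pool from 3 to 4 worker nodes.
-    $ gcloud container clusters resize cockroachdb --num-nodes=4
-    ~~~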
-
-2. Use the `kubectl scale` command to add a pod to your StatefulSet:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset cockroachdb --replicas=4
- ~~~
-
- ~~~
- statefulset "cockroachdb" scaled
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl scale statefulset my-release-cockroachdb --replicas=4
- ~~~
-
- ~~~
- statefulset "my-release-cockroachdb" scaled
- ~~~
-
diff --git a/src/current/_includes/v2.1/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v2.1/orchestration/kubernetes-simulate-failure.md
deleted file mode 100644
index d5f3e52884f..00000000000
--- a/src/current/_includes/v2.1/orchestration/kubernetes-simulate-failure.md
+++ /dev/null
@@ -1,56 +0,0 @@
-Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage.
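-
-To watch the recovery as it happens, you can keep a second terminal open with a watch on the pods (a simple sketch using `kubectl`'s `--watch` flag):
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Stream pod status changes as Kubernetes replaces the deleted pod.
-$ kubectl get pods --watch
-~~~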
-
-To see this in action:
-
-1. Terminate one of the CockroachDB nodes:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod cockroachdb-2
- ~~~
-
- ~~~
- pod "cockroachdb-2" deleted
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl delete pod my-release-cockroachdb-2
- ~~~
-
- ~~~
- pod "my-release-cockroachdb-2" deleted
- ~~~
-
-
-
-2. In the Admin UI, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy.
-
-3. Back in the terminal, verify that the pod was automatically restarted:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-2 1/1 Running 0 12s
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pod my-release-cockroachdb-2
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-2 1/1 Running 0 44s
- ~~~
-
diff --git a/src/current/_includes/v2.1/orchestration/kubernetes-upgrade-cluster.md b/src/current/_includes/v2.1/orchestration/kubernetes-upgrade-cluster.md
deleted file mode 100644
index 8d95600f9b6..00000000000
--- a/src/current/_includes/v2.1/orchestration/kubernetes-upgrade-cluster.md
+++ /dev/null
@@ -1,192 +0,0 @@
-As new versions of CockroachDB are released, it's strongly recommended to upgrade to newer versions in order to pick up bug fixes, performance improvements, and new features. The [general CockroachDB upgrade documentation](upgrade-cockroach-version.html) provides best practices for how to prepare for and execute upgrades of CockroachDB clusters, but the mechanism of actually stopping and restarting processes in Kubernetes is somewhat special.
-
-Kubernetes knows how to carry out a safe rolling upgrade process of the CockroachDB nodes. When you tell it to change the Docker image used in the CockroachDB StatefulSet, Kubernetes will go one-by-one, stopping a node, restarting it with the new image, and waiting for it to be ready to receive client requests before moving on to the next one. For more information, see [the Kubernetes documentation](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets).
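-
-Once the rolling upgrade is underway (step 2 below), you can follow its progress with `kubectl`'s rollout tracking (a sketch; substitute your release's StatefulSet name if you deployed via Helm):
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Block until every pod in the StatefulSet is running the new image.
-$ kubectl rollout status statefulset/cockroachdb
-~~~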
-
-1. Decide how the upgrade will be finalized.
-
- {{site.data.alerts.callout_info}}
- This step is relevant only when upgrading from v2.0.x to v2.1. For upgrades within the v2.1.x series, skip this step.
- {{site.data.alerts.end}}
-
- By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain performance improvements and bug fixes introduced in v2.1. After finalization, however, it will no longer be possible to perform a downgrade to v2.0. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade.
-
- We recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade:
-
- {% if page.secure == true %}
-
- 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html):
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
- ~~~
-
-
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- sql --insecure --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- sql --insecure --host=my-release-cockroachdb-public
- ~~~
-
-
- {% endif %}
-
- 2. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.0';
- ~~~
-
-2. Kick off the upgrade process by changing the desired Docker image. To do so, pick the version that you want to upgrade to, then run the following command, replacing `VERSION` with your desired new version:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset cockroachdb --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]'
- ~~~
-
- ~~~
- statefulset "cockroachdb" patched
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl patch statefulset my-release-cockroachdb --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:VERSION"}]'
- ~~~
-
- ~~~
-    statefulset "my-release-cockroachdb" patched
- ~~~
-
-
-3. If you then check the status of your cluster's pods, you should see one of them being restarted:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 2m
- cockroachdb-1 1/1 Running 0 2m
- cockroachdb-2 1/1 Running 0 2m
- cockroachdb-3 0/1 Terminating 0 1m
- ~~~
-
-
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 2m
- my-release-cockroachdb-1 1/1 Running 0 2m
- my-release-cockroachdb-2 1/1 Running 0 2m
- my-release-cockroachdb-3 0/1 Terminating 0 1m
- ~~~
-
-
-4. This will continue until all of the pods have restarted and are running the new image. To check the image of each pod and determine whether they've all been upgraded, run:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
- ~~~
-
-
- ~~~
- cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- cockroachdb-3 cockroachdb/cockroach:{{page.release_info.version}}
- ~~~
-
-
-
- ~~~
- my-release-cockroachdb-0 cockroachdb/cockroach:{{page.release_info.version}}
- my-release-cockroachdb-1 cockroachdb/cockroach:{{page.release_info.version}}
- my-release-cockroachdb-2 cockroachdb/cockroach:{{page.release_info.version}}
- my-release-cockroachdb-3 cockroachdb/cockroach:{{page.release_info.version}}
- ~~~
-
-
-5. Finish the upgrade.
-
- {{site.data.alerts.callout_info}}This step is relevant only when upgrading from v2.0.x to v2.1. For upgrades within the v2.1.x series, skip this step.{{site.data.alerts.end}}
-
- If you disabled auto-finalization in step 1 above, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.
-
- Once you are satisfied with the new version, re-enable auto-finalization:
-
- {% if page.secure == true %}
-
- 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html):
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
- ~~~
-
-
- {% else %}
-
- 1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- sql --insecure --host=cockroachdb-public
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- sql --insecure --host=my-release-cockroachdb-public
- ~~~
-
-
- {% endif %}
-
- 2. Re-enable auto-finalization:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
- ~~~
diff --git a/src/current/_includes/v2.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v2.1/orchestration/local-start-kubernetes.md
deleted file mode 100644
index a417f835984..00000000000
--- a/src/current/_includes/v2.1/orchestration/local-start-kubernetes.md
+++ /dev/null
@@ -1,24 +0,0 @@
-## Before you begin
-
-Before getting started, it's helpful to review some Kubernetes-specific terminology:
-
-Feature | Description
---------|------------
-[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation.
-[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one or more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4.
-[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5.
-[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart. When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted.
-[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node.
-
-## Step 1. Start Kubernetes
-
-1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation.
-
-    {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailable field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}}
-
-2. Start a local Kubernetes cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ minikube start
- ~~~
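-
-    To confirm the local cluster is ready before proceeding, you can check that the node reports `Ready` (a quick sanity check):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # The single minikube node should report STATUS Ready.
-    $ kubectl get nodes
-    ~~~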
diff --git a/src/current/_includes/v2.1/orchestration/monitor-cluster.md b/src/current/_includes/v2.1/orchestration/monitor-cluster.md
deleted file mode 100644
index ad0c5aabc01..00000000000
--- a/src/current/_includes/v2.1/orchestration/monitor-cluster.md
+++ /dev/null
@@ -1,37 +0,0 @@
-To access the cluster's [Admin UI](admin-ui-overview.html):
-
-1. Port-forward from your local machine to one of the pods:
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward cockroachdb-0 8080
- ~~~
-
-
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl port-forward my-release-cockroachdb-0 8080
- ~~~
-
-
- ~~~
- Forwarding from 127.0.0.1:8080 -> 8080
- ~~~
-
- {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}}
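-
-    If you prefer a quick command-line check that the UI endpoint is reachable, you can hit the node's health endpoint through the same port-forward (a sketch; CockroachDB serves `/health` on the HTTP port, and on a secure cluster you would use `https` with `curl -k`):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # A 200 response confirms the HTTP endpoint is reachable.
-    $ curl http://localhost:8080/health
-    ~~~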
-
-{% if page.secure == true %}
-
-2. Go to https://localhost:8080 and log in with the username and password you created earlier.
-
-{% else %}
-
-2. Go to http://localhost:8080.
-
-{% endif %}
-
-3. In the UI, verify that the cluster is running as expected:
- - Click **View nodes list** on the right to ensure that all nodes successfully joined the cluster.
- - Click the **Databases** tab on the left to verify that `bank` is listed.
diff --git a/src/current/_includes/v2.1/orchestration/start-cockroachdb-helm-insecure.md b/src/current/_includes/v2.1/orchestration/start-cockroachdb-helm-insecure.md
deleted file mode 100644
index 23a766b9e64..00000000000
--- a/src/current/_includes/v2.1/orchestration/start-cockroachdb-helm-insecure.md
+++ /dev/null
@@ -1,97 +0,0 @@
-1. [Install the Helm client](https://docs.helm.sh/using_helm/#installing-the-helm-client).
-
-2. [Install the Helm server, known as Tiller](https://docs.helm.sh/using_helm/#installing-tiller).
-
- In the likely case that your Kubernetes cluster uses RBAC (e.g., if you are using GKE), you need to create [RBAC resources](https://docs.helm.sh/using_helm/#role-based-access-control) to grant Tiller access to the Kubernetes API:
-
- 1. Create a `rbac-config.yaml` file to define a role and service account:
-
- {% include copy-clipboard.html %}
- ~~~
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: tiller
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- name: tiller
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: cluster-admin
- subjects:
- - kind: ServiceAccount
- name: tiller
- namespace: kube-system
- ~~~
-
- 2. Create the service account:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f rbac-config.yaml
- ~~~
-
- ~~~
- serviceaccount "tiller" created
- clusterrolebinding "tiller" created
- ~~~
-
- 3. Start the Helm server:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm init --service-account tiller
- ~~~
-
-3. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart:
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm install --name my-release cockroachdb/cockroachdb
- ~~~
-
- Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
-
- {{site.data.alerts.callout_info}}
- You can customize your deployment by passing [configuration parameters](https://github.com/cockroachdb/helm-charts/tree/master/cockroachdb#configuration) to `helm install` using the `--set key=value[,key=value]` flag. For a production cluster, you should consider modifying the `Storage` and `StorageClass` parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume `StorageClass` in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD).
- {{site.data.alerts.end}}
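-
-    For example, to override the per-pod disk size at install time (a sketch using the `Storage` parameter mentioned above):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # Request 200 GiB of storage per pod instead of the 100 GiB default.
-    $ helm install --name my-release --set Storage=200Gi cockroachdb/cockroachdb
-    ~~~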
-
-4. Confirm that three pods are `Running` successfully:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 48s
- my-release-cockroachdb-1 1/1 Running 0 47s
- my-release-cockroachdb-2 1/1 Running 0 47s
- ~~~
-
-5. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-64878ebf-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 51s
- pvc-64945b4f-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 51s
- pvc-649d920d-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 51s
- ~~~
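-
-    The corresponding claims can be listed the same way (each pod's claim is named `datadir-<pod name>`):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    # List the persistent volume claims bound to the volumes above.
-    $ kubectl get persistentvolumeclaims
-    ~~~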
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/orchestration/start-cockroachdb-helm-secure.md b/src/current/_includes/v2.1/orchestration/start-cockroachdb-helm-secure.md
deleted file mode 100644
index 573fe5201ff..00000000000
--- a/src/current/_includes/v2.1/orchestration/start-cockroachdb-helm-secure.md
+++ /dev/null
@@ -1,184 +0,0 @@
-1. [Install the Helm client](https://docs.helm.sh/using_helm/#installing-the-helm-client).
-
-2. [Install the Helm server, known as Tiller](https://docs.helm.sh/using_helm/#installing-tiller).
-
- In the likely case that your Kubernetes cluster uses RBAC (e.g., if you are using GKE), you need to create [RBAC resources](https://docs.helm.sh/using_helm/#role-based-access-control) to grant Tiller access to the Kubernetes API:
-
- 1. Create a `rbac-config.yaml` file to define a role and service account:
-
- {% include copy-clipboard.html %}
- ~~~
- apiVersion: v1
- kind: ServiceAccount
- metadata:
- name: tiller
- namespace: kube-system
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: ClusterRoleBinding
- metadata:
- name: tiller
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: cluster-admin
- subjects:
- - kind: ServiceAccount
- name: tiller
- namespace: kube-system
- ~~~
-
- 2. Create the service account:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f rbac-config.yaml
- ~~~
-
- ~~~
- serviceaccount "tiller" created
- clusterrolebinding "tiller" created
- ~~~
-
- 3. Start the Helm server:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm init --service-account tiller
- ~~~
-
-3. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart and setting the `Secure.Enabled` parameter to `true`:
-
- {{site.data.alerts.callout_info}}
- This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands.
- {{site.data.alerts.end}}
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ helm install --name my-release --set Secure.Enabled=true cockroachdb/cockroachdb
- ~~~
-
- Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart.
-
- {{site.data.alerts.callout_info}}
- You can customize your deployment by passing additional [configuration parameters](https://github.com/cockroachdb/helm-charts/tree/master/cockroachdb#configuration) to `helm install` using the `--set key=value[,key=value]` flag. For a production cluster, you should consider modifying the `Storage` and `StorageClass` parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume `StorageClass` in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD).
- {{site.data.alerts.end}}
-
-4. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
-
- 1. Get the name of the `Pending` CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.client.root 21s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-0 15s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-1 16s system:serviceaccount:default:my-release-cockroachdb Pending
- default.node.my-release-cockroachdb-2 15s system:serviceaccount:default:my-release-cockroachdb Pending
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
-
- 2. Examine the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.my-release-cockroachdb-0
- ~~~
-
- ~~~
- Name: default.node.my-release-cockroachdb-0
-        Labels:             <none>
-        Annotations:        <none>
- CreationTimestamp: Mon, 10 Dec 2018 05:36:35 -0500
- Requesting User: system:serviceaccount:default:my-release-cockroachdb
- Status: Pending
- Subject:
- Common Name: node
- Serial Number:
- Organization: Cockroach
- Subject Alternative Names:
- DNS Names: localhost
- my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local
- my-release-cockroachdb-0.my-release-cockroachdb
- my-release-cockroachdb-public
- my-release-cockroachdb-public.default.svc.cluster.local
- IP Addresses: 127.0.0.1
- 10.48.1.6
-        Events:             <none>
- ~~~
-
- 3. If everything looks correct, approve the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.my-release-cockroachdb-0
- ~~~
-
- ~~~
- certificatesigningrequest "default.node.my-release-cockroachdb-0" approved
- ~~~
-
- 4. Repeat steps 1-3 for the other 2 pods.
-
-5. Confirm that three pods are `Running` successfully:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 0/1 Running 0 6m
- my-release-cockroachdb-1 0/1 Running 0 6m
- my-release-cockroachdb-2 0/1 Running 0 6m
- my-release-cockroachdb-init-hxzsc 0/1 Init:0/1 0 6m
- ~~~
-
-6. Approve the CSR for the one-off pod from which cluster initialization happens:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.client.root
- ~~~
-
- ~~~
- certificatesigningrequest "default.client.root" approved
- ~~~
-
-7. Confirm that cluster initialization has completed successfully, with each pod showing `1/1` under `READY`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- my-release-cockroachdb-0 1/1 Running 0 8m
- my-release-cockroachdb-1 1/1 Running 0 8m
- my-release-cockroachdb-2 1/1 Running 0 8m
- ~~~
-
-8. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
- NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
- pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m
- pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m
- pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/orchestration/start-cockroachdb-insecure.md b/src/current/_includes/v2.1/orchestration/start-cockroachdb-insecure.md
deleted file mode 100644
index aeab8b2e9e3..00000000000
--- a/src/current/_includes/v2.1/orchestration/start-cockroachdb-insecure.md
+++ /dev/null
@@ -1,101 +0,0 @@
-1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml
- ~~~
-
- ~~~
- service "cockroachdb-public" created
- service "cockroachdb" created
- poddisruptionbudget "cockroachdb-budget" created
- statefulset "cockroachdb" created
- ~~~
-
- Alternatively, if you'd rather start with a configuration file that has been customized for performance:
-
- 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml
- ~~~
-
- 2. Modify the file wherever there is a `TODO` comment.
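-
-        To locate the `TODO` comments quickly (a small convenience):
-
-        {% include copy-clipboard.html %}
-        ~~~ shell
-        # Show each TODO marker with its line number.
-        $ grep -n TODO cockroachdb-statefulset-insecure.yaml
-        ~~~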
-
- 3. Use the file to create the StatefulSet and start the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset-insecure.yaml
- ~~~
-
-2. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
-3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
- pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
- pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
- pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
- ~~~
-
-4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the nodes into a single cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml
- ~~~
-
- ~~~
- job "cluster-init" created
- ~~~
-
-5. Confirm that cluster initialization has completed successfully. The job
- should be considered successful and the CockroachDB pods should soon be
- considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init
- ~~~
-
- ~~~
- NAME DESIRED SUCCESSFUL AGE
- cluster-init 1 1 2m
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 3m
- cockroachdb-1 1/1 Running 0 3m
- cockroachdb-2 1/1 Running 0 3m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/orchestration/start-cockroachdb-secure.md b/src/current/_includes/v2.1/orchestration/start-cockroachdb-secure.md
deleted file mode 100644
index 0231d5a2e38..00000000000
--- a/src/current/_includes/v2.1/orchestration/start-cockroachdb-secure.md
+++ /dev/null
@@ -1,182 +0,0 @@
-{{site.data.alerts.callout_info}}
-If you want to use a different certificate authority than the one Kubernetes uses, or if your Kubernetes cluster doesn't fully support certificate-signing requests (e.g., in Amazon EKS), use [these configuration files](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/bring-your-own-certs) instead of the ones referenced below.
-{{site.data.alerts.end}}
-
-1. From your local workstation, use our [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml
- ~~~
-
- ~~~
- serviceaccount "cockroachdb" created
- role "cockroachdb" created
- clusterrole "cockroachdb" created
- rolebinding "cockroachdb" created
- clusterrolebinding "cockroachdb" created
- service "cockroachdb-public" created
- service "cockroachdb" created
- poddisruptionbudget "cockroachdb-budget" created
- statefulset "cockroachdb" created
- ~~~
-
- Alternatively, if you'd rather start with a configuration file that has been customized for performance:
-
- 1. Download our [performance version of `cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml
- ~~~
-
- 2. Modify the file wherever there is a `TODO` comment.
-
- 3. Use the file to create the StatefulSet and start the cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f cockroachdb-statefulset-secure.yaml
- ~~~
-
-2. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod.
-
- 1. Get the name of the `Pending` CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get csr
- ~~~
-
- ~~~
- NAME AGE REQUESTOR CONDITION
- default.node.cockroachdb-0 1m system:serviceaccount:default:default Pending
- node-csr-0Xmb4UTVAWMEnUeGbW4KX1oL4XV_LADpkwjrPtQjlZ4 4m kubelet Approved,Issued
- node-csr-NiN8oDsLhxn0uwLTWa0RWpMUgJYnwcFxB984mwjjYsY 4m kubelet Approved,Issued
- node-csr-aU78SxyU69pDK57aj6txnevr7X-8M3XgX9mTK0Hso6o 5m kubelet Approved,Issued
- ~~~
-
- If you do not see a `Pending` CSR, wait a minute and try again.
-
- 2. Examine the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl describe csr default.node.cockroachdb-0
- ~~~
-
- ~~~
- Name: default.node.cockroachdb-0
-        Labels:             <none>
-        Annotations:        <none>
- CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500
- Requesting User: system:serviceaccount:default:default
- Status: Pending
- Subject:
- Common Name: node
- Serial Number:
- Organization: Cockroach
- Subject Alternative Names:
- DNS Names: localhost
- cockroachdb-0.cockroachdb.default.svc.cluster.local
- cockroachdb-public
- IP Addresses: 127.0.0.1
- 10.48.1.6
-        Events:             <none>
- ~~~
-
- 3. If everything looks correct, approve the CSR for the first pod:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.node.cockroachdb-0
- ~~~
-
- ~~~
- certificatesigningrequest "default.node.cockroachdb-0" approved
- ~~~
-
- 4. Repeat steps 1-3 for the other 2 pods.
-
-3. Initialize the cluster:
-
- 1. Confirm that three pods are `Running` successfully. Note that they will not
- be considered `Ready` until after the cluster has been initialized:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 0/1 Running 0 2m
- cockroachdb-1 0/1 Running 0 2m
- cockroachdb-2 0/1 Running 0 2m
- ~~~
-
- 2. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get persistentvolumes
- ~~~
-
- ~~~
- NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE
- pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s
- pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s
- pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s
- ~~~
-
- 3. Use our [`cluster-init-secure.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml) file to perform a one-time initialization that joins the nodes into a single cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml
- ~~~
-
- ~~~
- job "cluster-init-secure" created
- ~~~
-
- 4. Approve the CSR for the one-off pod from which cluster initialization happens:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl certificate approve default.client.root
- ~~~
-
- ~~~
- certificatesigningrequest "default.client.root" approved
- ~~~
-
- 5. Confirm that cluster initialization has completed successfully. The job
- should be considered successful and the CockroachDB pods should soon be
- considered `Ready`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get job cluster-init-secure
- ~~~
-
- ~~~
- NAME DESIRED SUCCESSFUL AGE
- cluster-init-secure 1 1 2m
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl get pods
- ~~~
-
- ~~~
- NAME READY STATUS RESTARTS AGE
- cockroachdb-0 1/1 Running 0 3m
- cockroachdb-1 1/1 Running 0 3m
- cockroachdb-2 1/1 Running 0 3m
- ~~~
-
-{{site.data.alerts.callout_success}}
-The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs <podname>` rather than checking the log on the persistent volume.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/orchestration/start-kubernetes.md b/src/current/_includes/v2.1/orchestration/start-kubernetes.md
deleted file mode 100644
index 0fd64cbf6b2..00000000000
--- a/src/current/_includes/v2.1/orchestration/start-kubernetes.md
+++ /dev/null
@@ -1,69 +0,0 @@
-Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice.
-
-- [Hosted GKE](#hosted-gke)
-- [Manual GCE](#manual-gce)
-- [Manual AWS](#manual-aws)
-
-### Hosted GKE
-
-1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation.
-
- This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
- {{site.data.alerts.callout_success}}The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. Choose to use a local shell if you want to be able to view the CockroachDB Admin UI using the steps in this guide.{{site.data.alerts.end}}
-
-2. From your local workstation, start the Kubernetes cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ gcloud container clusters create cockroachdb --machine-type n1-standard-4
- ~~~
-
- ~~~
- Creating cluster cockroachdb...done.
- ~~~
-
- This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--machine-type` flag tells the node pool to use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
-
- The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
-
-3. Get the email address associated with your Google Cloud account:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ gcloud info | grep Account
- ~~~
-
- ~~~
- Account: [your.google.cloud.email@example.org]
- ~~~
-
- {{site.data.alerts.callout_danger}}
-    This command returns your email address in all lowercase. However, in the next step, you must enter the address using its exact capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.
- {{site.data.alerts.end}}
-
-4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user=
- ~~~
-
- ~~~
- clusterrolebinding "cluster-admin-binding" created
- ~~~
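-
-    For example, if the previous step had returned the hypothetical address `maxroach@example.com`, the command would be:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl create clusterrolebinding $USER-cluster-admin-binding --clusterrole=cluster-admin --user=maxroach@example.com
-    ~~~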
-
-### Manual GCE
-
-From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on Google Compute Engine](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/gce/) documentation.
-
-The process includes:
-
-- Creating a Google Cloud Platform account, installing `gcloud`, and other prerequisites.
-- Downloading and installing the latest Kubernetes release.
-- Creating GCE instances and joining them into a single Kubernetes cluster.
-- Installing `kubectl`, the command-line tool used to manage Kubernetes from your workstation.
-
-### Manual AWS
-
-From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on AWS EC2](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/aws/) documentation.
diff --git a/src/current/_includes/v2.1/orchestration/test-cluster-insecure.md b/src/current/_includes/v2.1/orchestration/test-cluster-insecure.md
deleted file mode 100644
index e0758f4ded3..00000000000
--- a/src/current/_includes/v2.1/orchestration/test-cluster-insecure.md
+++ /dev/null
@@ -1,62 +0,0 @@
-1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- sql --insecure --host=cockroachdb-public
- ~~~
-
-    If you installed CockroachDB using the Helm chart instead, the public service is named `my-release-cockroachdb-public`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl run cockroachdb -it --image=cockroachdb/cockroach --rm --restart=Never \
- -- sql --insecure --host=my-release-cockroachdb-public
- ~~~
-
-2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (
- id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
- balance DECIMAL
- );
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts (balance)
- VALUES
- (1000.50), (20000), (380), (500), (55000);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- id | balance
- +--------------------------------------+---------+
- 6f123370-c48c-41ff-b384-2c185590af2b | 380
- 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50
- ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500
- d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000
- e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000
- (5 rows)
- ~~~
-
-3. Exit the SQL shell and delete the temporary pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
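-
-    Because the pod was started with `--rm`, it is deleted automatically when you exit the SQL shell. A quick check should now list only the CockroachDB pods:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods
-    ~~~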
diff --git a/src/current/_includes/v2.1/orchestration/test-cluster-secure.md b/src/current/_includes/v2.1/orchestration/test-cluster-secure.md
deleted file mode 100644
index 1d57b929fee..00000000000
--- a/src/current/_includes/v2.1/orchestration/test-cluster-secure.md
+++ /dev/null
@@ -1,183 +0,0 @@
-To use the built-in SQL client, you need to launch a pod that runs indefinitely with the `cockroach` binary inside it, get a shell into the pod, and then start the built-in SQL client.
-
-1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
- ~~~
-
- ~~~
- pod "cockroachdb-client-secure" created
- ~~~
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-2. Get a shell into the pod and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the cockroach SQL interface.
- # All statements must be terminated by a semicolon.
- # To exit: CTRL + D.
- #
- # Server version: CockroachDB CCL v1.1.2 (linux amd64, built 2017/11/02 19:32:03, go1.8.3) (same version as client)
- # Cluster ID: 3292fe08-939f-4638-b8dd-848074611dba
- #
- # Enter \? for a brief introduction.
- #
- root@cockroachdb-public:26257/>
- ~~~
-
-3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES (1, 1000.50);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- +----+---------+
- | id | balance |
- +----+---------+
- | 1 | 1000.5 |
- +----+---------+
- (1 row)
- ~~~
-
-4. [Create a user with a password](create-user.html#create-a-user-with-a-password):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
- ~~~
-
- You will need this username and password to access the Admin UI later.
-
-5. Exit the SQL shell and pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-If you installed CockroachDB using the Helm chart instead, use the following steps:
-
-1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely.
-
- 1. Download the file:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ curl -O \
- https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
- ~~~
-
- 1. In the file, change `serviceAccountName: cockroachdb` to `serviceAccountName: my-release-cockroachdb`.
-
- 1. Use the file to launch a pod and keep it running indefinitely:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl create -f client-secure.yaml
- ~~~
-
- ~~~
- pod "cockroachdb-client-secure" created
- ~~~
-
- The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required.
-
-2. Get a shell into the pod and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --certs-dir=/cockroach-certs --host=my-release-cockroachdb-public
- ~~~
-
- ~~~
- # Welcome to the cockroach SQL interface.
- # All statements must be terminated by a semicolon.
- # To exit: CTRL + D.
- #
- # Server version: CockroachDB CCL v1.1.2 (linux amd64, built 2017/11/02 19:32:03, go1.8.3) (same version as client)
- # Cluster ID: 3292fe08-939f-4638-b8dd-848074611dba
- #
- # Enter \? for a brief introduction.
- #
- root@my-release-cockroachdb-public:26257/>
- ~~~
-
-3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE bank;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > INSERT INTO bank.accounts VALUES (1, 1000.50);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SELECT * FROM bank.accounts;
- ~~~
-
- ~~~
- +----+---------+
- | id | balance |
- +----+---------+
- | 1 | 1000.5 |
- +----+---------+
- (1 row)
- ~~~
-
-4. [Create a user with a password](create-user.html#create-a-user-with-a-password):
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
- ~~~
-
- You will need this username and password to access the Admin UI later.
-
-5. Exit the SQL shell and pod:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
-
-{{site.data.alerts.callout_success}}
-This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command.
-
-If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`.
-{{site.data.alerts.end}}
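-
-For example, to check the status of the nodes from the same pod (a sketch using the manual configuration's host name):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl exec -it cockroachdb-client-secure -- ./cockroach node status --certs-dir=/cockroach-certs --host=cockroachdb-public
-~~~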
diff --git a/src/current/_includes/v2.1/performance/check-rebalancing-after-partitioning.md b/src/current/_includes/v2.1/performance/check-rebalancing-after-partitioning.md
deleted file mode 100644
index 7db608b9dd4..00000000000
--- a/src/current/_includes/v2.1/performance/check-rebalancing-after-partitioning.md
+++ /dev/null
@@ -1,41 +0,0 @@
-Over the next minutes, CockroachDB will rebalance all partitions based on the constraints you defined.
-
-To check this at a high level, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning.
-
-To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SELECT * FROM \
-[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \
-WHERE \"start_key\" IS NOT NULL \
- AND \"start_key\" NOT LIKE '%Prefix%';"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+------------------+----------------------------+----------+----------+--------------+
- /"boston" | /"boston"/PrefixEnd | 105 | {1,2,3} | 3
- /"los angeles" | /"los angeles"/PrefixEnd | 121 | {7,8,9} | 8
- /"new york" | /"new york"/PrefixEnd | 101 | {1,2,3} | 3
- /"san francisco" | /"san francisco"/PrefixEnd | 117 | {7,8,9} | 8
- /"seattle" | /"seattle"/PrefixEnd | 113 | {4,5,6} | 5
- /"washington dc" | /"washington dc"/PrefixEnd | 109 | {1,2,3} | 1
-(6 rows)
-~~~
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`.
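-
-The same spot check works for any other partitioned table. For example, against `users` (a sketch; range IDs and replica placement will differ in your cluster):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SELECT * FROM \
-[SHOW EXPERIMENTAL_RANGES FROM TABLE users] \
-WHERE \"start_key\" IS NOT NULL \
-  AND \"start_key\" NOT LIKE '%Prefix%';"
-~~~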
diff --git a/src/current/_includes/v2.1/performance/check-rebalancing.md b/src/current/_includes/v2.1/performance/check-rebalancing.md
deleted file mode 100644
index 576565354db..00000000000
--- a/src/current/_includes/v2.1/performance/check-rebalancing.md
+++ /dev/null
@@ -1,33 +0,0 @@
-Since you started each node with the `--locality` flag set to its GCE zone, over the next minutes, CockroachDB will rebalance data evenly across the zones.
-
-To check this, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes.
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-To verify even balancing at range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
- NULL | NULL | 33 | {3,4,7} | 7
-(1 row)
-~~~
-
-In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west2-a` zone.
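-
-Running the same statement against the much larger `rides` table should return multiple ranges, again with replicas spread across all three zones (a sketch; your range boundaries will differ):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE rides;"
-~~~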
diff --git a/src/current/_includes/v2.1/performance/configure-network.md b/src/current/_includes/v2.1/performance/configure-network.md
deleted file mode 100644
index 91fdf87d5c1..00000000000
--- a/src/current/_includes/v2.1/performance/configure-network.md
+++ /dev/null
@@ -1,18 +0,0 @@
-CockroachDB requires TCP communication on two ports:
-
-- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster)
-- **8080** (`tcp:8080`) for accessing the Web UI
-
-Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, if you want to access the Web UI from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls):
-
-Field | Recommended Value
-------|------------------
-Name | **cockroachweb**
-Source filter | IP ranges
-Source IP ranges | Your local network's IP ranges
-Allowed protocols | **tcp:8080**
-Target tags | `cockroachdb`
-
-{{site.data.alerts.callout_info}}
-The **tag** feature will let you easily apply the rule to your instances.
-{{site.data.alerts.end}}
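-
-If you prefer the command line, a roughly equivalent rule can be created with `gcloud` (a sketch; `203.0.113.0/24` is a stand-in for your local network's IP range):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ gcloud compute firewall-rules create cockroachweb \
---allow tcp:8080 \
---source-ranges 203.0.113.0/24 \
---target-tags cockroachdb
-~~~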
diff --git a/src/current/_includes/v2.1/performance/import-movr.md b/src/current/_includes/v2.1/performance/import-movr.md
deleted file mode 100644
index 5d796bf47d2..00000000000
--- a/src/current/_includes/v2.1/performance/import-movr.md
+++ /dev/null
@@ -1,160 +0,0 @@
-Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle).
-
-1. Still on the fourth instance, start the [built-in SQL shell](use-the-built-in-sql-client.html), pointing it at one of the CockroachDB nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql {{page.certs}} --host=
- ~~~
-
-2. Create the `movr` database and set it as the default:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE movr;
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SET DATABASE = movr;
- ~~~
-
-3. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles`, and `rides` tables:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE users (
- id UUID NOT NULL,
- city STRING NOT NULL,
- name STRING NULL,
- address STRING NULL,
- credit_card STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+------+---------------+----------------+--------+
- 390345990764396545 | succeeded | 1 | 1998 | 0 | 0 | 241052
- (1 row)
-
- Time: 2.882582355s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE vehicles (
- id UUID NOT NULL,
- city STRING NOT NULL,
- type STRING NULL,
- owner_id UUID NULL,
- creation_time TIMESTAMP NULL,
- status STRING NULL,
- ext JSON NULL,
- mycol STRING NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+-------+---------------+----------------+---------+
- 390346109887250433 | succeeded | 1 | 19998 | 19998 | 0 | 3558767
- (1 row)
-
- Time: 5.803841493s
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > IMPORT TABLE rides (
- id UUID NOT NULL,
- city STRING NOT NULL,
- vehicle_city STRING NULL,
- rider_id UUID NULL,
- vehicle_id UUID NULL,
- start_address STRING NULL,
- end_address STRING NULL,
- start_time TIMESTAMP NULL,
- end_time TIMESTAMP NULL,
- revenue DECIMAL(10,2) NULL,
- CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
- INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
- INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
- CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
- )
- CSV DATA (
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv',
- 'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv'
- );
- ~~~
-
- ~~~
- job_id | status | fraction_completed | rows | index_entries | system_records | bytes
- +--------------------+-----------+--------------------+--------+---------------+----------------+-----------+
- 390346325693792257 | succeeded | 1 | 999996 | 1999992 | 0 | 339741841
- (1 row)
-
- Time: 44.620371424s
- ~~~
-
- {{site.data.alerts.callout_success}}
- You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](admin-ui-jobs-page.html) of the Web UI.
- {{site.data.alerts.end}}
-
-4. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables:
-
- Referencing columns | Referenced columns
- --------------------|-------------------
- `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id`
- `rides.city`, `rides.rider_id` | `users.city`, `users.id`
- `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id`
-
- As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE vehicles
- ADD CONSTRAINT fk_city_ref_users
- FOREIGN KEY (city, owner_id)
- REFERENCES users (city, id);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE rides
- ADD CONSTRAINT fk_city_ref_users
- FOREIGN KEY (city, rider_id)
- REFERENCES users (city, id);
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > ALTER TABLE rides
- ADD CONSTRAINT fk_vehicle_city_ref_vehicles
- FOREIGN KEY (vehicle_city, vehicle_id)
- REFERENCES vehicles (city, id);
- ~~~
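-
-    To verify that all three constraints are now in place, you can inspect the `rides` table (a sketch; the exact output columns vary by version):
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SHOW CONSTRAINTS FROM rides;
-    ~~~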
-
-5. Exit the built-in SQL shell:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > \q
- ~~~
diff --git a/src/current/_includes/v2.1/performance/overview.md b/src/current/_includes/v2.1/performance/overview.md
deleted file mode 100644
index 176915f8848..00000000000
--- a/src/current/_includes/v2.1/performance/overview.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### Topology
-
-You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload.
-
-{{site.data.alerts.callout_info}}
-Within a single GCE zone, network latency between instances should be sub-millisecond.
-{{site.data.alerts.end}}
-
-You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload.
-
-To reproduce the performance demonstrated in this tutorial:
-
-- For each CockroachDB node, you'll use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk.
-- For running the client application workload, you'll use smaller instances, such as `n1-standard-1`.
-
-### Schema
-
-Your schema and data will be based on our open-source, fictional peer-to-peer ride-sharing application, [MovR](https://github.com/cockroachdb/movr).
-
-A few notes about the schema:
-
-- There are just three self-explanatory tables: In essence, `users` represents the people registered for the service, `vehicles` represents the pool of vehicles for the service, and `rides` represents when and where users have participated.
-- Each table has a composite primary key, with `city` being first in the key. Although not necessary initially in the single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling.
-- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later.
-- The `rides` table contains both `city` and the seemingly redundant `vehicle_city`. This redundancy is necessary because, while it is not possible to apply more than one foreign key constraint to a single column, you will need to apply two foreign key constraints to the `rides` table, and each will require city as part of the constraint. The duplicate `vehicle_city`, which is kept in sync with `city` via a [`CHECK` constraint](check.html), lets you overcome [this limitation](https://github.com/cockroachdb/cockroach/issues/23580).
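-
-In standalone form, the constraint that keeps `vehicle_city` in sync would look like this (a sketch; in this tutorial it is instead defined inline in the `IMPORT` statement):
-
-~~~ sql
-> ALTER TABLE rides ADD CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city);
-~~~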
-
-### Important concepts
-
-To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first understand [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). Review that document before getting started here.
diff --git a/src/current/_includes/v2.1/performance/partition-by-city.md b/src/current/_includes/v2.1/performance/partition-by-city.md
deleted file mode 100644
index 9498f02933f..00000000000
--- a/src/current/_includes/v2.1/performance/partition-by-city.md
+++ /dev/null
@@ -1,419 +0,0 @@
-For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region.
-
-1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/).
-
-2. Once you've received the trial license, SSH to any node in your cluster and [apply the license](enterprise-licensing.html#set-the-trial-or-enterprise-license-key):
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --host= \
- --execute="SET CLUSTER SETTING cluster.organization = '';"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --host= \
- --execute="SET CLUSTER SETTING enterprise.license = '';"
- ~~~
-
-3. Define partitions for all tables and their secondary indexes.
-
- Start with the `users` table:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- Now define partitions for the `vehicles` table and its secondary indexes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE vehicles \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york_idx VALUES IN ('new york'), \
- PARTITION boston_idx VALUES IN ('boston'), \
- PARTITION washington_dc_idx VALUES IN ('washington dc'), \
- PARTITION seattle_idx VALUES IN ('seattle'), \
- PARTITION san_francisco_idx VALUES IN ('san francisco'), \
- PARTITION los_angeles_idx VALUES IN ('los angeles') \
- );"
- ~~~
-
- Next, define partitions for the `rides` table and its secondary indexes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER TABLE rides \
- PARTITION BY LIST (city) ( \
- PARTITION new_york VALUES IN ('new york'), \
- PARTITION boston VALUES IN ('boston'), \
- PARTITION washington_dc VALUES IN ('washington dc'), \
- PARTITION seattle VALUES IN ('seattle'), \
- PARTITION san_francisco VALUES IN ('san francisco'), \
- PARTITION los_angeles VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \
- PARTITION BY LIST (city) ( \
- PARTITION new_york_idx1 VALUES IN ('new york'), \
- PARTITION boston_idx1 VALUES IN ('boston'), \
- PARTITION washington_dc_idx1 VALUES IN ('washington dc'), \
- PARTITION seattle_idx1 VALUES IN ('seattle'), \
- PARTITION san_francisco_idx1 VALUES IN ('san francisco'), \
- PARTITION los_angeles_idx1 VALUES IN ('los angeles') \
- );"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \
- PARTITION BY LIST (vehicle_city) ( \
- PARTITION new_york_idx2 VALUES IN ('new york'), \
- PARTITION boston_idx2 VALUES IN ('boston'), \
- PARTITION washington_dc_idx2 VALUES IN ('washington dc'), \
- PARTITION seattle_idx2 VALUES IN ('seattle'), \
- PARTITION san_francisco_idx2 VALUES IN ('san francisco'), \
- PARTITION los_angeles_idx2 VALUES IN ('los angeles') \
- );"
- ~~~
-
- Finally, drop an unused index on `rides` rather than partition it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql \
- {{page.certs}} \
- --database=movr \
- --host= \
- --execute="DROP INDEX rides_start_time_idx;"
- ~~~
-
- {{site.data.alerts.callout_info}}
- The `rides` table contains 1 million rows, so dropping this index will take a few minutes.
- {{site.data.alerts.end}}
-
-4. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-table-or-secondary-index-partition) to require city data to be stored on specific nodes based on node locality.
-
- City | Locality
- -----|---------
- New York | `zone=us-east1-b`
- Boston | `zone=us-east1-b`
- Washington DC | `zone=us-east1-b`
- Seattle | `zone=us-west1-a`
- San Francisco | `zone=us-west2-a`
- Los Angeles | `zone=us-west2-a`
-
- {{site.data.alerts.callout_info}}
-    Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead.
- {{site.data.alerts.end}}
-
- Start with the `users` table partitions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- Move on to the `vehicles` table and secondary index partitions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- Finish with the `rides` table and secondary index partitions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION new_york_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION boston_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc_idx OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION washington_dc_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION seattle_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION san_francisco_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --execute="ALTER PARTITION los_angeles_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \
- {{page.certs}} \
- --host=
- ~~~
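-
-    To confirm that a zone configuration took effect, you can inspect any partition (a sketch; v2.1 syntax, and the output echoes the constraints you set):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --execute="SHOW ZONE CONFIGURATION FOR PARTITION new_york OF TABLE movr.users;" \
-    {{page.certs}} \
-    --host=
-    ~~~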
diff --git a/src/current/_includes/v2.1/performance/scale-cluster.md b/src/current/_includes/v2.1/performance/scale-cluster.md
deleted file mode 100644
index e18069d5185..00000000000
--- a/src/current/_includes/v2.1/performance/scale-cluster.md
+++ /dev/null
@@ -1,61 +0,0 @@
-1. SSH to one of the `n1-standard-4` instances in the `us-west1-a` zone.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-3. Run the [`cockroach start`](start-a-node.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-host= \
- --join= \
- --locality=cloud=gce,region=us-west1,zone=us-west1-a \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-4. Repeat steps 1-3 for the other two `n1-standard-4` instances in the `us-west1-a` zone.
-
-5. SSH to one of the `n1-standard-4` instances in the `us-west2-a` zone.
-
-6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-7. Run the [`cockroach start`](start-a-node.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-host= \
- --join= \
- --locality=cloud=gce,region=us-west2,zone=us-west2-a \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-8. Repeat steps 5-7 for the other two `n1-standard-4` instances in the `us-west2-a` zone.
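-
-At this point, all 9 nodes are joined into one cluster. One way to spot-check their localities is the `crdb_internal.gossip_nodes` virtual table (a sketch; internal tables are unversioned and can change between releases):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---execute="SELECT node_id, locality FROM crdb_internal.gossip_nodes;"
-~~~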
diff --git a/src/current/_includes/v2.1/performance/start-cluster.md b/src/current/_includes/v2.1/performance/start-cluster.md
deleted file mode 100644
index 67b20c15192..00000000000
--- a/src/current/_includes/v2.1/performance/start-cluster.md
+++ /dev/null
@@ -1,60 +0,0 @@
-#### Start the nodes
-
-1. SSH to the first `n1-standard-4` instance.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-3. Run the [`cockroach start`](start-a-node.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- {{page.certs}} \
- --advertise-host= \
- --join=:26257,:26257,:26257 \
- --locality=cloud=gce,region=us-east1,zone=us-east1-b \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
-4. Repeat steps 1-3 for the other two `n1-standard-4` instances. Be sure to adjust the `--advertise-host` flag each time.
-
-#### Initialize the cluster
-
-1. SSH to the fourth instance, the one not running a CockroachDB node.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
-4. Run the [`cockroach init`](initialize-a-cluster.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init {{page.certs}} --host=
- ~~~
-
- Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Web UI, and the SQL URL for clients.
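-
-    To confirm that all three nodes joined, you can run `cockroach node status` from the same instance (a sketch):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach node status {{page.certs}} --host=
-    ~~~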
diff --git a/src/current/_includes/v2.1/performance/test-performance-after-partitioning.md b/src/current/_includes/v2.1/performance/test-performance-after-partitioning.md
deleted file mode 100644
index 16c07a9f92d..00000000000
--- a/src/current/_includes/v2.1/performance/test-performance-after-partitioning.md
+++ /dev/null
@@ -1,93 +0,0 @@
-After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city.
-
-To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance).
-
-#### Reads
-
-Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
-
-1. SSH to the instance in `us-east1-b` with the Python client.
-
-2. Query for the data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'new york' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
- ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
- ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
- ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
- ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
- ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
- ...
-
- Times (milliseconds):
- [20.065784454345703, 7.866144180297852, 8.362054824829102, 9.08803939819336, 7.925987243652344, 7.543087005615234, 7.786035537719727, 8.227825164794922, 7.907867431640625, 7.654905319213867, 7.793903350830078, 7.627964019775391, 7.833957672119141, 7.858037948608398, 7.474184036254883, 9.459972381591797, 7.726192474365234, 7.194995880126953, 7.364034652709961, 7.25102424621582, 7.650852203369141, 7.663965225219727, 9.334087371826172, 7.810115814208984, 7.543087005615234, 7.134914398193359, 7.922887802124023, 7.220029830932617, 7.606029510498047, 7.208108901977539, 7.333993911743164, 7.464170455932617, 7.679939270019531, 7.436990737915039, 7.62486457824707, 7.235050201416016, 7.420063018798828, 7.795095443725586, 7.39598274230957, 7.546901702880859, 7.582187652587891, 7.9669952392578125, 7.418155670166016, 7.539033889770508, 7.805109024047852, 7.086992263793945, 7.069826126098633, 7.833957672119141, 7.43412971496582, 7.035017013549805]
-
- Median time (milliseconds):
- 7.62641429901
- ~~~
-
-Before partitioning, this query took a median time of 72.02ms. After partitioning, the query took a median time of only 7.62ms.
-
-#### Writes
-
-Now let's again imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
-
-1. SSH to the instance in `us-west1-a` with the Python client.
-
-2. Create 100 Seattle-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [41.8248176574707, 9.701967239379883, 8.725166320800781, 9.058952331542969, 7.819175720214844, 6.247997283935547, 10.265827178955078, 7.627964019775391, 9.120941162109375, 7.977008819580078, 9.247064590454102, 8.929967880249023, 9.610176086425781, 14.40286636352539, 8.588075637817383, 8.67319107055664, 9.417057037353516, 7.652044296264648, 8.917093276977539, 9.135961532592773, 8.604049682617188, 9.220123291015625, 7.578134536743164, 9.096860885620117, 8.942842483520508, 8.63790512084961, 7.722139358520508, 13.59701156616211, 9.176015853881836, 11.484146118164062, 9.212017059326172, 7.563114166259766, 8.793115615844727, 8.80289077758789, 7.827043533325195, 7.6389312744140625, 17.47584342956543, 9.436845779418945, 7.63392448425293, 8.594989776611328, 9.002208709716797, 8.93402099609375, 8.71896743774414, 8.76307487487793, 8.156061172485352, 8.729934692382812, 8.738040924072266, 8.25190544128418, 8.971929550170898, 7.460832595825195, 8.889198303222656, 8.45789909362793, 8.761167526245117, 10.223865509033203, 8.892059326171875, 8.961915969848633, 8.968114852905273, 7.750988006591797, 7.761955261230469, 9.199142456054688, 9.02700424194336, 9.509086608886719, 9.428977966308594, 7.902860641479492, 8.940935134887695, 8.615970611572266, 8.75401496887207, 7.906913757324219, 8.179187774658203, 11.447906494140625, 8.71419906616211, 9.202003479003906, 9.263038635253906, 9.089946746826172, 8.92496109008789, 10.32114028930664, 7.913827896118164, 9.464025497436523, 10.612010955810547, 8.78596305847168, 8.878946304321289, 7.575035095214844, 10.657072067260742, 8.777856826782227, 8.649110794067383, 9.012937545776367, 8.931875228881836, 9.31406021118164, 9.396076202392578, 8.908987045288086, 8.002996444702148, 9.089946746826172, 7.5588226318359375, 8.918046951293945, 12.117862701416016, 7.266998291015625, 8.074045181274414, 8.955001831054688, 8.868932723999023, 8.755922317504883]
-
- Median time (milliseconds):
- 8.90052318573
- ~~~
-
- Before partitioning, this query took a median time of 48.40ms. After partitioning, the query took a median time of only 8.90ms.
-
-3. SSH to the instance in `us-east1-b` with the Python client.
-
-4. Create 100 new NY-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [276.3068675994873, 9.830951690673828, 8.772134780883789, 9.304046630859375, 8.24880599975586, 7.959842681884766, 7.848978042602539, 7.879018783569336, 7.754087448120117, 10.724067687988281, 13.960123062133789, 9.825944900512695, 9.60993766784668, 9.273052215576172, 9.41920280456543, 8.040904998779297, 16.484975814819336, 10.178089141845703, 8.322000503540039, 9.468793869018555, 8.002042770385742, 9.185075759887695, 9.54294204711914, 9.387016296386719, 9.676933288574219, 13.051986694335938, 9.506940841674805, 12.327909469604492, 10.377168655395508, 15.023946762084961, 9.985923767089844, 7.853031158447266, 9.43303108215332, 9.164094924926758, 10.941028594970703, 9.37199592590332, 12.359857559204102, 8.975028991699219, 7.728099822998047, 8.310079574584961, 9.792089462280273, 9.448051452636719, 8.057117462158203, 9.37795639038086, 9.753942489624023, 9.576082229614258, 8.192062377929688, 9.392023086547852, 7.97581672668457, 8.165121078491211, 9.660959243774414, 8.270978927612305, 9.901046752929688, 8.085966110229492, 10.581016540527344, 9.831905364990234, 7.883787155151367, 8.077859878540039, 8.161067962646484, 10.02812385559082, 7.9898834228515625, 9.840965270996094, 9.452104568481445, 9.747028350830078, 9.003162384033203, 9.206056594848633, 9.274005889892578, 7.8449249267578125, 8.827924728393555, 9.322881698608398, 12.08186149597168, 8.76307487487793, 8.353948593139648, 8.182048797607422, 7.736921310424805, 9.31406021118164, 9.263992309570312, 9.282112121582031, 7.823944091796875, 9.11712646484375, 8.099079132080078, 9.156942367553711, 8.363962173461914, 10.974884033203125, 8.729934692382812, 9.2620849609375, 9.27591323852539, 8.272886276245117, 8.25190544128418, 8.093118667602539, 9.259939193725586, 8.413076400756836, 8.198976516723633, 9.95182991027832, 8.024930953979492, 8.895158767700195, 8.243083953857422, 9.076833724975586, 9.994029998779297, 10.149955749511719]
-
- Median time (milliseconds):
- 9.26303863525
- ~~~
-
- Before partitioning, this query took a median time of 116.86ms. After partitioning, the query took a median time of only 9.26ms.
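-
-To recap the median latencies from these spot checks:
-
-Query | Before partitioning | After partitioning
-------|---------------------|-------------------
-New York reads | 72.02ms | 7.62ms
-Seattle writes (100 inserts) | 48.40ms | 8.90ms
-New York writes (100 inserts) | 116.86ms | 9.26ms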
diff --git a/src/current/_includes/v2.1/performance/test-performance.md b/src/current/_includes/v2.1/performance/test-performance.md
deleted file mode 100644
index 2009ac9653f..00000000000
--- a/src/current/_includes/v2.1/performance/test-performance.md
+++ /dev/null
@@ -1,146 +0,0 @@
-In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases.
-
-#### Reads
-
-For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
-
-1. SSH to the instance in `us-east1-b` with the Python client.
-
-2. Query for the data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'new york' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
- ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
- ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
- ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
- ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
- ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
- ...
-
- Times (milliseconds):
- [933.8209629058838, 72.02410697937012, 72.45206832885742, 72.39294052124023, 72.8158950805664, 72.07584381103516, 72.21412658691406, 71.96712493896484, 71.75517082214355, 72.16811180114746, 71.78592681884766, 72.91603088378906, 71.91109657287598, 71.4719295501709, 72.40676879882812, 71.8080997467041, 71.84004783630371, 71.98500633239746, 72.40891456604004, 73.75001907348633, 71.45905494689941, 71.53081893920898, 71.46596908569336, 72.07608222961426, 71.94995880126953, 71.41804695129395, 71.29096984863281, 72.11899757385254, 71.63381576538086, 71.3050365447998, 71.83194160461426, 71.20394706726074, 70.9981918334961, 72.79205322265625, 72.63493537902832, 72.15285301208496, 71.8698501586914, 72.30591773986816, 71.53582572937012, 72.69001007080078, 72.03006744384766, 72.56317138671875, 71.61688804626465, 72.17121124267578, 70.20092010498047, 72.12018966674805, 73.34589958190918, 73.01592826843262, 71.49410247802734, 72.19099998474121]
-
- Median time (milliseconds):
- 72.0270872116
- ~~~
-
-As we saw earlier, the leaseholder for the `vehicles` table is in `us-west2-a` (Los Angeles), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client.
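-
-If you want to re-confirm where that leaseholder lives, the earlier range check can be rerun at any time (node-to-zone mapping as shown previously):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;"
-~~~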
-
-For contrast, imagine we are now a Movr administrator in Los Angeles, and we want to get the IDs and descriptions of all Los Angeles-based bikes that are currently in use:
-
-1. SSH to the instance in `us-west2-a` with the Python client.
-
-2. Query for the data:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ {{page.app}} \
- --host= \
- --statement="SELECT id, ext FROM vehicles \
- WHERE city = 'los angeles' \
- AND type = 'bike' \
- AND status = 'in_use'" \
- --repeat=50 \
- --times
- ~~~
-
- ~~~
- Result:
- ['id', 'ext']
- ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"]
- ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"]
- ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"]
- ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"]
- ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"]
-
- Times (milliseconds):
- [782.6759815216064, 8.564949035644531, 8.226156234741211, 7.949113845825195, 7.86590576171875, 7.842063903808594, 7.674932479858398, 7.555961608886719, 7.642984390258789, 8.024930953979492, 7.717132568359375, 8.46409797668457, 7.520914077758789, 7.6541900634765625, 7.458925247192383, 7.671833038330078, 7.740020751953125, 7.771015167236328, 7.598161697387695, 8.411169052124023, 7.408857345581055, 7.469892501831055, 7.524967193603516, 7.764101028442383, 7.750988006591797, 7.2460174560546875, 6.927967071533203, 7.822990417480469, 7.27391242980957, 7.730960845947266, 7.4710845947265625, 7.4310302734375, 7.33494758605957, 7.455110549926758, 7.021188735961914, 7.083892822265625, 7.812976837158203, 7.625102996826172, 7.447957992553711, 7.179021835327148, 7.504940032958984, 7.224082946777344, 7.257938385009766, 7.714986801147461, 7.4939727783203125, 7.6160430908203125, 7.578849792480469, 7.890939712524414, 7.546901702880859, 7.411956787109375]
-
- Median time (milliseconds):
- 7.6071023941
- ~~~
-
-Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 7.60ms compared to the similar query in New York that took 72.02ms.
-
-#### Writes
-
-The geographic distribution of data impacts write performance as well. For example, imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
-
-1. SSH to the instance in `us-west1-a` with the Python client.
-
-2. Create 100 Seattle-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [277.4538993835449, 50.12702941894531, 47.75214195251465, 48.13408851623535, 47.872066497802734, 48.65407943725586, 47.78695106506348, 49.14689064025879, 52.770137786865234, 49.00097846984863, 48.68602752685547, 47.387123107910156, 47.36208915710449, 47.6841926574707, 46.49209976196289, 47.06096649169922, 46.753883361816406, 46.304941177368164, 48.90894889831543, 48.63715171813965, 48.37393760681152, 49.23295974731445, 50.13418197631836, 48.310041427612305, 48.57516288757324, 47.62911796569824, 47.77693748474121, 47.505855560302734, 47.89996147155762, 49.79205131530762, 50.76479911804199, 50.21500587463379, 48.73299598693848, 47.55592346191406, 47.35088348388672, 46.7071533203125, 43.00808906555176, 43.1060791015625, 46.02813720703125, 47.91092872619629, 68.71294975280762, 49.241065979003906, 48.9039421081543, 47.82295227050781, 48.26998710632324, 47.631025314331055, 64.51892852783203, 48.12812805175781, 67.33417510986328, 48.603057861328125, 50.31013488769531, 51.02396011352539, 51.45716667175293, 50.85396766662598, 49.07512664794922, 47.49894142150879, 44.67201232910156, 43.827056884765625, 44.412851333618164, 46.69189453125, 49.55601692199707, 49.16882514953613, 49.88598823547363, 49.31306838989258, 46.875, 46.69594764709473, 48.31886291503906, 48.378944396972656, 49.0570068359375, 49.417972564697266, 48.22111129760742, 50.662994384765625, 50.58097839355469, 75.44088363647461, 51.05400085449219, 50.85110664367676, 48.187971115112305, 56.7781925201416, 42.47403144836426, 46.2191104888916, 53.96890640258789, 46.697139739990234, 48.99096488952637, 49.1330623626709, 46.34690284729004, 47.09315299987793, 46.39410972595215, 46.51689529418945, 47.58000373840332, 47.924041748046875, 48.426151275634766, 50.22597312927246, 50.1859188079834, 50.37498474121094, 49.861907958984375, 51.477909088134766, 73.09293746948242, 48.779964447021484, 45.13692855834961, 42.2968864440918]
-
- Median time (milliseconds):
- 48.4025478363
- ~~~
-
-3. SSH to the instance in `us-east1-b` with the Python client.
-
-4. Create 100 new NY-based users:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {{page.app}} \
- --host= \
- --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \
- --repeat=100 \
- --times
- ~~~
-
- ~~~
- Times (milliseconds):
- [131.05082511901855, 116.88899993896484, 115.15498161315918, 117.095947265625, 121.04082107543945, 115.8750057220459, 113.80696296691895, 113.05880546569824, 118.41201782226562, 125.30899047851562, 117.5389289855957, 115.23890495300293, 116.84799194335938, 120.0411319732666, 115.62800407409668, 115.08989334106445, 113.37089538574219, 115.15498161315918, 115.96989631652832, 133.1961154937744, 114.25995826721191, 118.09396743774414, 122.24102020263672, 116.14608764648438, 114.80998992919922, 131.9139003753662, 114.54391479492188, 115.15307426452637, 116.7759895324707, 135.10799407958984, 117.18511581420898, 120.15485763549805, 118.0570125579834, 114.52388763427734, 115.28396606445312, 130.00011444091797, 126.45292282104492, 142.69423484802246, 117.60401725769043, 134.08493995666504, 117.47002601623535, 115.75007438659668, 117.98381805419922, 115.83089828491211, 114.88890647888184, 113.23404312133789, 121.1700439453125, 117.84791946411133, 115.35286903381348, 115.0820255279541, 116.99700355529785, 116.67394638061523, 116.1041259765625, 114.67289924621582, 112.98894882202148, 117.1119213104248, 119.78602409362793, 114.57300186157227, 129.58717346191406, 118.37983131408691, 126.68204307556152, 118.30306053161621, 113.27195167541504, 114.22920227050781, 115.80777168273926, 116.81294441223145, 114.76683616638184, 115.1430606842041, 117.29192733764648, 118.24417114257812, 116.56999588012695, 113.8620376586914, 114.88819122314453, 120.80597877502441, 132.39002227783203, 131.00910186767578, 114.56179618835449, 117.03896522521973, 117.72680282592773, 115.6010627746582, 115.27681350708008, 114.52317237854004, 114.87483978271484, 117.78903007507324, 116.65701866149902, 122.6949691772461, 117.65193939208984, 120.5449104309082, 115.61179161071777, 117.54202842712402, 114.70890045166016, 113.58809471130371, 129.7171115875244, 117.57993698120117, 117.1119213104248, 117.64001846313477, 140.66505432128906, 136.41691207885742, 116.24789237976074, 115.19908905029297]
-
- Median time (milliseconds):
- 116.868495941
- ~~~
-
-Creating a user took a median of 48.40ms in Seattle but 116.87ms in New York. To better understand this discrepancy, let's look at the distribution of data for the `users` table:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host= \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;"
-~~~
-
-~~~
- start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
- NULL | NULL | 49 | {2,6,8} | 6
-(1 row)
-~~~
-
-For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-west1-a` zone. This means that:
-
-- When creating a user in Seattle, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client.
-- When creating a user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1-a` (Seattle). It then has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client back in the east.
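-
-To make this arithmetic concrete, here is a rough, back-of-the-envelope model of the two write paths. The round-trip times below are illustrative assumptions, not measurements from this cluster:
-
-~~~ python
-# Hypothetical model: a write pays one round trip from the client to the
-# leaseholder, plus one round trip from the leaseholder to the nearest
-# quorum peer. All RTTs below are assumed values, for illustration only.
-INTRA_ZONE_RTT_MS = 1    # client and leaseholder in the same zone (assumed)
-SEATTLE_LA_RTT_MS = 30   # us-west1-a <-> us-west2-a (assumed)
-NY_SEATTLE_RTT_MS = 70   # us-east1-b <-> us-west1-a (assumed)
-
-def write_latency(client_to_leaseholder_ms, leaseholder_to_quorum_ms):
-    return client_to_leaseholder_ms + leaseholder_to_quorum_ms
-
-print(write_latency(INTRA_ZONE_RTT_MS, SEATTLE_LA_RTT_MS))   # Seattle: ~31ms
-print(write_latency(NY_SEATTLE_RTT_MS, SEATTLE_LA_RTT_MS))   # New York: ~100ms
-~~~
-
-The model ignores processing time and retries, but it reproduces the shape of the measurements above: the New York write pays a cross-country round trip before consensus even begins.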
diff --git a/src/current/_includes/v2.1/performance/tuning-secure.py b/src/current/_includes/v2.1/performance/tuning-secure.py
deleted file mode 100644
index a644dbb1c87..00000000000
--- a/src/current/_includes/v2.1/performance/tuning-secure.py
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
- description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
- help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
- help="statement to execute")
-parser.add_argument("--repeat", type=int,
- help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
- help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
- help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
- database='movr',
- user='root',
- host=args.host,
- port=26257,
- sslmode='require',
- sslrootcert='certs/ca.crt',
- sslkey='certs/client.root.key',
- sslcert='certs/client.root.crt'
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
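-# Return the median of a list of numbers, or None if the list is empty.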
-def median(lst):
- n = len(lst)
- if n < 1:
- return None
- if n % 2 == 1:
- return sorted(lst)[n//2]
- else:
- return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
-times = list()
-for n in range(args.repeat):
- start = time.time()
- statement = args.statement
- cur.execute(statement)
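-    # On the first repetition only, print the column names and any rows the
-    # statement returns; this print overhead is included in the first sample.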
- if n < 1:
- if cur.description is not None:
- colnames = [desc[0] for desc in cur.description]
- print("")
- print("Result:")
- print(colnames)
- rows = cur.fetchall()
- for row in rows:
- print([str(cell) for cell in row])
- end = time.time()
- times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
- print("Times (milliseconds):")
- print(times)
- print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
- print("Cumulative time (milliseconds):")
- print(sum(times))
- print("")
diff --git a/src/current/_includes/v2.1/performance/tuning.py b/src/current/_includes/v2.1/performance/tuning.py
deleted file mode 100644
index dcb567dad91..00000000000
--- a/src/current/_includes/v2.1/performance/tuning.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
- description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
- help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
- help="statement to execute")
-parser.add_argument("--repeat", type=int,
- help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
- help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
- help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
- database='movr',
- user='root',
- host=args.host,
- port=26257
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
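-# Return the median of a list of numbers, or None if the list is empty.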
-def median(lst):
- n = len(lst)
- if n < 1:
- return None
- if n % 2 == 1:
- return sorted(lst)[n//2]
- else:
- return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
-times = list()
-for n in range(args.repeat):
- start = time.time()
- statement = args.statement
- cur.execute(statement)
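-    # On the first repetition only, print the column names and any rows the
-    # statement returns; this print overhead is included in the first sample.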
- if n < 1:
- if cur.description is not None:
- colnames = [desc[0] for desc in cur.description]
- print("")
- print("Result:")
- print(colnames)
- rows = cur.fetchall()
- for row in rows:
- print([str(cell) for cell in row])
- end = time.time()
- times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
- print("Times (milliseconds):")
- print(times)
- print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
- print("Cumulative time (milliseconds):")
- print(sum(times))
- print("")
diff --git a/src/current/_includes/v2.1/prod-deployment/advertise-addr-join.md b/src/current/_includes/v2.1/prod-deployment/advertise-addr-join.md
deleted file mode 100644
index 67019d1fcea..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/advertise-addr-join.md
+++ /dev/null
@@ -1,4 +0,0 @@
-Flag | Description
------|------------
-`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
-`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
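-
-As a hypothetical example (the addresses are placeholders, and security flags are omitted for brevity), the two flags are typically used together like this:
-
-~~~ shell
-$ cockroach start \
---advertise-addr=10.0.0.1:26257 \
---join=10.0.0.1,10.0.0.2,10.0.0.3
-~~~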
diff --git a/src/current/_includes/v2.1/prod-deployment/backup.sh b/src/current/_includes/v2.1/prod-deployment/backup.sh
deleted file mode 100644
index c1a0bc3c5a6..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/backup.sh
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/bin/bash
-
-set -euo pipefail
-
-# This script creates full backups when run on the configured
-# day of the week and incremental backups when run on other days, and tracks
-# recently created backups in a file to pass as the base for incremental backups.
-
-full_day="" # Must match (including case) the output of `LC_ALL=C date +%A`.
-what="DATABASE " # The name of the database you want to backup.
-base="/backups" # The URL where you want to store the backup.
-extra="" # Any additional parameters that need to be appended to the BACKUP URI e.g., AWS key params.
-recent=recent_backups.txt # File in which recent backups are tracked.
-backup_parameters= # e.g., "WITH revision_history"
-
-# Customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, `--port`, and additional flags as needed to connect to your cluster.
-runsql() { cockroach sql --insecure -e "$1"; }
-
-destination="${base}/$(date +"%Y%m%d-%H%M")${extra}"
-
-# Build a comma-separated, quoted list of recent backups to use as the base for
-# an incremental backup. Create the tracking file if it does not yet exist, so
-# the redirection below does not fail under `set -e` on the first run.
-[[ -f "$recent" ]] || touch "$recent"
-prev=
-while read -r line; do
-    [[ "$prev" ]] && prev+=", "
-    prev+="'$line'"
-done < "$recent"
-
-if [[ "$(LC_ALL=C date +%A)" = "$full_day" || ! "$prev" ]]; then
- runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' $backup_parameters"
- echo "$destination" > "$recent"
-else
- destination="${base}/$(date +"%Y%m%d-%H%M")-inc${extra}"
- runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' INCREMENTAL FROM $prev $backup_parameters"
- echo "$destination" >> "$recent"
-fi
-
-echo "backed up to ${destination}"
diff --git a/src/current/_includes/v2.1/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v2.1/prod-deployment/insecure-initialize-cluster.md
deleted file mode 100644
index 5d1384c8467..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecure-initialize-cluster.md
+++ /dev/null
@@ -1,12 +0,0 @@
-On your local machine, complete the node startup process and have them join together as a cluster:
-
-1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
-
-2. Run the [`cockroach init`](initialize-a-cluster.html) command, with the `--host` flag set to the address of any node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach init --insecure --host=
- ~~~
-
- Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
diff --git a/src/current/_includes/v2.1/prod-deployment/insecure-recommendations.md b/src/current/_includes/v2.1/prod-deployment/insecure-recommendations.md
deleted file mode 100644
index e6f7fc0b9fe..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecure-recommendations.md
+++ /dev/null
@@ -1,15 +0,0 @@
-- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html).
-
-- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks:
- - Your cluster is open to any client that can access any node's IP addresses.
- - Any user, even `root`, can log in without providing a password.
- - Any user, connecting as `root`, can read or write any data in your cluster.
- - There is no network encryption or authentication, and thus no confidentiality.
-
-- Decide how you want to access your Admin UI:
-
- Access Level | Description
- -------------|------------
- Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
- Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
- Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.
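-
-  As a sketch of the "partially open" option, assuming a Linux host managed with `iptables` (the trusted source address is a placeholder):
-
-  ~~~ shell
-  # Allow Admin UI traffic on port 8080 from one trusted address, then drop the rest:
-  $ sudo iptables -A INPUT -p tcp --dport 8080 -s 203.0.113.10 -j ACCEPT
-  $ sudo iptables -A INPUT -p tcp --dport 8080 -j DROP
-  ~~~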
diff --git a/src/current/_includes/v2.1/prod-deployment/insecure-requirements.md b/src/current/_includes/v2.1/prod-deployment/insecure-requirements.md
deleted file mode 100644
index 52640254763..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecure-requirements.md
+++ /dev/null
@@ -1,5 +0,0 @@
-- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
-
-- Your network configuration must allow TCP communication on the following ports:
- - `26257` for intra-cluster and client-cluster communication
- - `8080` to expose your Admin UI
diff --git a/src/current/_includes/v2.1/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v2.1/prod-deployment/insecure-scale-cluster.md
deleted file mode 100644
index bf74674761e..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecure-scale-cluster.md
+++ /dev/null
@@ -1,117 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start --insecure \
- --advertise-addr= \
- --locality= \
- --cache=.25 \
- --max-sql-memory=.25 \
- --join=,, \
- --background
- ~~~
-
-5. Update your load balancer to recognize the new node.
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Change the ownership of the `/var/lib/cockroach` directory to the `cockroach` user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ chown cockroach /var/lib/cockroach
- ~~~
-
-7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service):
-
- {% include copy-clipboard.html %}
- ~~~ shell
    $ wget -qO insecurecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
- ~~~
-
    Save the file in the `/etc/systemd/system/` directory.
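-
-    After saving a new or changed unit file, `systemd` generally needs to reload its configuration before the service can be managed; one way to do that:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ systemctl daemon-reload
-    ~~~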
-
-8. Customize the sample configuration template for your deployment:
-
- Specify values for the following flags in the sample configuration template:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
-9. Repeat these steps for each additional node that you want in your cluster.
-
diff --git a/src/current/_includes/v2.1/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v2.1/prod-deployment/insecure-start-nodes.md
deleted file mode 100644
index b67edfed311..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecure-start-nodes.md
+++ /dev/null
@@ -1,148 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}
-After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.
-{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --insecure \
- --advertise-addr= \
- --join=,, \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
- This command primes the node to start, using the following flags:
-
- Flag | Description
- -----|------------
- `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication.
    `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
- `--cache` `--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
- `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
-
    When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. This flag is also required by certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](start-a-node.html).
-
-5. Repeat these steps for each additional node that you want in your cluster.
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Change the ownership of the `/var/lib/cockroach` directory to the `cockroach` user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ chown cockroach /var/lib/cockroach
- ~~~
-
-7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
    $ wget -q -O /etc/systemd/system/insecurecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %}
- ~~~
-
-8. In the sample configuration template, specify values for the following flags:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
    When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. This flag is also required by certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
-
    For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](start-a-node.html).
-
-9. Start the CockroachDB cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ systemctl start insecurecockroachdb
- ~~~
-
-10. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb`.
-{{site.data.alerts.end}}
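-
-To check on a node that `systemd` is managing, you can use the standard `systemd` tooling; for example:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ systemctl status insecurecockroachdb
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ journalctl -u insecurecockroachdb
-~~~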
-
diff --git a/src/current/_includes/v2.1/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v2.1/prod-deployment/insecure-test-cluster.md
deleted file mode 100644
index 307b8f999b9..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecure-test-cluster.md
+++ /dev/null
@@ -1,48 +0,0 @@
-CockroachDB replicates and distributes data for you behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
-
-To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
-
-1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of any node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure --host=
- ~~~
-
-2. Create an `insecurenodetest` database:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE insecurenodetest;
- ~~~
-
-3. Use `\q` or `ctrl-d` to exit the SQL shell.
-
-4. Launch the built-in SQL client, with the `--host` flag set to the address of a different node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --insecure --host=
- ~~~
-
-5. View the cluster's databases, which will include `insecurenodetest`:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SHOW DATABASES;
- ~~~
-
- ~~~
- +--------------------+
- | Database |
- +--------------------+
- | crdb_internal |
- | information_schema |
- | insecurenodetest |
- | pg_catalog |
- | system |
- +--------------------+
- (5 rows)
- ~~~
-
-6. Use `\q` to exit the SQL shell.
diff --git a/src/current/_includes/v2.1/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v2.1/prod-deployment/insecure-test-load-balancing.md
deleted file mode 100644
index e4369b54410..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecure-test-load-balancing.md
+++ /dev/null
@@ -1,41 +0,0 @@
-CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
- This should be a machine that is not running a CockroachDB node.
-
-2. Download `workload` and make it executable:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
- ~~~
-
-3. Rename and copy `workload` into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i workload.LATEST /usr/local/bin/workload
- ~~~
-
-4. Start the TPC-C workload, pointing it at the IP address of the load balancer:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ workload run tpcc \
- --drop \
- --init \
- --duration=20m \
- --tolerate-errors \
    "postgresql://root@<address of load balancer>:26257?sslmode=disable"
    ~~~
-
-    For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.
-
-5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
- Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/src/current/_includes/v2.1/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v2.1/prod-deployment/insecurecockroachdb.service
deleted file mode 100644
index b027b941009..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/insecurecockroachdb.service
+++ /dev/null
@@ -1,16 +0,0 @@
-[Unit]
-Description=Cockroach Database cluster node
-Requires=network.target
-[Service]
-Type=notify
-WorkingDirectory=/var/lib/cockroach
-ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25
-TimeoutStopSec=60
-Restart=always
-RestartSec=10
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=cockroach
-User=cockroach
-[Install]
-WantedBy=default.target
diff --git a/src/current/_includes/v2.1/prod-deployment/monitor-cluster.md b/src/current/_includes/v2.1/prod-deployment/monitor-cluster.md
deleted file mode 100644
index cb8185eac19..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/monitor-cluster.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Despite CockroachDB's various [built-in safeguards against failure](high-availability.html), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention.
-
-For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html).
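-
-As a quick, manual spot check on an insecure cluster, each node also serves HTTP endpoints you can poll; assuming the Admin UI is on the default port `8080`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Basic liveness check for the local node:
-$ curl http://localhost:8080/health
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Prometheus-format metrics for the local node:
-$ curl http://localhost:8080/_status/vars
-~~~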
diff --git a/src/current/_includes/v2.1/prod-deployment/prod-see-also.md b/src/current/_includes/v2.1/prod-deployment/prod-see-also.md
deleted file mode 100644
index 9dc661f6dfc..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/prod-see-also.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- [Production Checklist](recommended-production-settings.html)
-- [Manual Deployment](manual-deployment.html)
-- [Orchestrated Deployment](orchestration.html)
-- [Monitoring and Alerting](monitoring-and-alerting.html)
-- [Performance Benchmarking](performance-benchmarking-with-tpc-c.html)
-- [Performance Tuning](performance-tuning.html)
-- [Local Deployment](start-a-local-cluster.html)
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v2.1/prod-deployment/secure-generate-certificates.md
deleted file mode 100644
index c4d49062272..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-generate-certificates.md
+++ /dev/null
@@ -1,148 +0,0 @@
-You can use either `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands.
-
-Locally, you'll need to [create the following certificates and keys](create-security-certificates.html):
-
-- A certificate authority (CA) key pair (`ca.crt` and `ca.key`).
-- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers.
-- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine.
-
-{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}}
-
-1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already.
-
-2. Create two directories:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir certs
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir my-safe-directory
- ~~~
- - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes.
- - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes.
-
-3. Create the CA certificate and key:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-ca \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
    <node1 internal IP address> \
    <node1 external IP address> \
    <node1 hostname> \
    <other common names for node1> \
    localhost \
    127.0.0.1 \
    <load balancer IP address> \
    <load balancer hostname> \
    <other common names for load balancer instances> \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-5. Upload certificates to the first node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Create the certs directory:
    $ ssh <username>@<node1 address> "mkdir certs"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Upload the CA certificate and node certificate and key:
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
    <username>@<node1 address>:~/certs
- ~~~
-
-6. Delete the local copy of the node certificate and key:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ rm certs/node.crt certs/node.key
- ~~~
-
    {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag.{{site.data.alerts.end}}
-
-7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-node \
    <node2 internal IP address> \
    <node2 external IP address> \
    <node2 hostname> \
    <other common names for node2> \
    localhost \
    127.0.0.1 \
    <load balancer IP address> \
    <load balancer hostname> \
    <other common names for load balancer instances> \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-8. Upload certificates to the second node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Create the certs directory:
    $ ssh <username>@<node2 address> "mkdir certs"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Upload the CA certificate and node certificate and key:
- $ scp certs/ca.crt \
- certs/node.crt \
- certs/node.key \
    <username>@<node2 address>:~/certs
- ~~~
-
-9. Repeat steps 6-8 for each additional node.
-
-10. Create a client certificate and key for the `root` user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach cert create-client \
- root \
- --certs-dir=certs \
- --ca-key=my-safe-directory/ca.key
- ~~~
-
-11. Upload certificates to the machine where you will run a sample workload:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Create the certs directory:
    $ ssh <username>@<workload address> "mkdir certs"
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- # Upload the CA certificate and client certificate and key:
- $ scp certs/ca.crt \
- certs/client.root.crt \
- certs/client.root.key \
    <username>@<workload address>:~/certs
- ~~~
-
- In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well.
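-
-    To double-check what was generated, you can list the certificates and keys in a directory; for example:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach cert list --certs-dir=certs
-    ~~~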
-
-{{site.data.alerts.callout_info}}
-On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster).
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v2.1/prod-deployment/secure-initialize-cluster.md
deleted file mode 100644
index 0dc9b750307..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-initialize-cluster.md
+++ /dev/null
@@ -1,8 +0,0 @@
-On your local machine, run the [`cockroach init`](initialize-a-cluster.html) command to complete the node startup process and have them join together as a cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init --certs-dir=certs --host=
-~~~
-
-After running this command, each node prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients.
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-recommendations.md b/src/current/_includes/v2.1/prod-deployment/secure-recommendations.md
deleted file mode 100644
index 79d077ee84d..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-recommendations.md
+++ /dev/null
@@ -1,9 +0,0 @@
-- If you plan to use CockroachDB in production, carefully review the [Production Checklist](recommended-production-settings.html).
-
-- Decide how you want to access your Admin UI:
-
- Access Level | Description
- -------------|------------
- Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`.
- Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`.
- Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI.
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-requirements.md b/src/current/_includes/v2.1/prod-deployment/secure-requirements.md
deleted file mode 100644
index f4a9beb1209..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-requirements.md
+++ /dev/null
@@ -1,7 +0,0 @@
-- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates.
-
-- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries.
-
-- Your network configuration must allow TCP communication on the following ports:
- - `26257` for intra-cluster and client-cluster communication
- - `8080` to expose your Admin UI
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v2.1/prod-deployment/secure-scale-cluster.md
deleted file mode 100644
index 6c41ceb5f1f..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-scale-cluster.md
+++ /dev/null
@@ -1,125 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command just like you did for the initial nodes:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --advertise-addr= \
- --locality= \
- --cache=.25 \
- --max-sql-memory=.25 \
- --join=,, \
- --background
- ~~~
-
-5. Update your load balancer to recognize the new node.
-
-For each additional node you want to add to the cluster, complete the following steps:
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Move the `certs` directory to `/var/lib/cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mv certs /var/lib/cockroach/
- ~~~
-
-7. Change the ownership of the `/var/lib/cockroach` directory to the `cockroach` user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
    $ chown -R cockroach:cockroach /var/lib/cockroach
- ~~~
-
-8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service):
-
- {% include copy-clipboard.html %}
- ~~~ shell
    $ wget -qO securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
- ~~~
-
- Save the file in the `/etc/systemd/system/` directory.
-
-9. Customize the sample configuration template for your deployment:
-
- Specify values for the following flags in the sample configuration template:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
-10. Repeat these steps for each additional node that you want in your cluster.
-
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-start-nodes.md b/src/current/_includes/v2.1/prod-deployment/secure-start-nodes.md
deleted file mode 100644
index 6f50dc3d627..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-start-nodes.md
+++ /dev/null
@@ -1,153 +0,0 @@
-You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/).
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Run the [`cockroach start`](start-a-node.html) command:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach start \
- --certs-dir=certs \
- --advertise-addr= \
- --join=,, \
- --cache=.25 \
- --max-sql-memory=.25 \
- --background
- ~~~
-
- This command primes the node to start, using the following flags:
-
- Flag | Description
- -----|------------
- `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node.
    `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
- `--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
- `--cache` `--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size).
- `--background` | Starts the node in the background so you gain control of the terminal to issue more commands.
-
    When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. This flag is also required by certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=:8080`. To set these options manually, see [Start a Node](start-a-node.html).
-
-5. Repeat these steps for each additional node that you want in your cluster.
-
-For each initial node of your cluster, complete the following steps:
-
-{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
- | tar -xz
- ~~~
-
-3. Copy the binary into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
- ~~~
-
- If you get a permissions error, prefix the command with `sudo`.
-
-4. Create the Cockroach directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mkdir /var/lib/cockroach
- ~~~
-
-5. Create a Unix user named `cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ useradd cockroach
- ~~~
-
-6. Move the `certs` directory to `/var/lib/cockroach`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ mv certs /var/lib/cockroach/
- ~~~
-
-7. Change the ownership of the `/var/lib/cockroach` directory to the `cockroach` user:
-
- {% include copy-clipboard.html %}
- ~~~ shell
    $ chown -R cockroach:cockroach /var/lib/cockroach
- ~~~
-
-8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory:
-
- {% include copy-clipboard.html %}
- ~~~ shell
    $ wget -q -O /etc/systemd/system/securecockroachdb.service https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service
- ~~~
-
- Alternatively, you can create the file yourself and copy the script into it:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %}
- ~~~
-
-9. In the sample configuration template, specify values for the following flags:
-
- {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %}
-
    When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. This flag is also required by certain enterprise features. For more details, see [Locality](start-a-node.html#locality).
-
- For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](start-a-node.html).
-
-10. Start the CockroachDB cluster:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ systemctl start securecockroachdb
- ~~~
-
-11. Repeat these steps for each additional node that you want in your cluster.
-
-{{site.data.alerts.callout_info}}
-`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb`.
-{{site.data.alerts.end}}
-
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-test-cluster.md b/src/current/_includes/v2.1/prod-deployment/secure-test-cluster.md
deleted file mode 100644
index ba8b3370bb1..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-test-cluster.md
+++ /dev/null
@@ -1,48 +0,0 @@
-CockroachDB replicates and distributes data for you behind the scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster.
-
-To test this, use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows:
-
-1. On your local machine, launch the built-in SQL client:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --certs-dir=certs --host=
- ~~~
-
-2. Create a `securenodetest` database:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > CREATE DATABASE securenodetest;
- ~~~
-
-3. Use `\q` to exit the SQL shell.
-
-4. Launch the built-in SQL client against a different node:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach sql --certs-dir=certs --host=
- ~~~
-
-5. View the cluster's databases, which will include `securenodetest`:
-
- {% include copy-clipboard.html %}
- ~~~ sql
- > SHOW DATABASES;
- ~~~
-
- ~~~
- +--------------------+
- | Database |
- +--------------------+
- | crdb_internal |
- | information_schema |
- | securenodetest |
- | pg_catalog |
- | system |
- +--------------------+
- (5 rows)
- ~~~
-
-6. Use `\q` to exit the SQL shell.
diff --git a/src/current/_includes/v2.1/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v2.1/prod-deployment/secure-test-load-balancing.md
deleted file mode 100644
index 85981cbec60..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/secure-test-load-balancing.md
+++ /dev/null
@@ -1,43 +0,0 @@
-CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
- This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files.
-
-2. Download `workload` and make it executable:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
- ~~~
-
-3. Rename and copy `workload` into the `PATH`:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ cp -i workload.LATEST /usr/local/bin/workload
- ~~~
-
-4. Start the TPC-C workload, pointing it at the IP address of the load balancer and the location of the `ca.crt`, `client.root.crt`, and `client.root.key` files:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ workload run tpcc \
- --drop \
- --init \
- --duration=20m \
- --tolerate-errors \
    "postgresql://root@<address of load balancer>:26257?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
    ~~~
-
-    For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.
-
-5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
- For each user who should have access to the Admin UI for a secure cluster, [create a user with a password](create-user.html#create-a-user-with-a-password). On accessing the Admin UI, the users will see a Login screen, where they will need to enter their usernames and passwords.
-
- Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
diff --git a/src/current/_includes/v2.1/prod-deployment/securecockroachdb.service b/src/current/_includes/v2.1/prod-deployment/securecockroachdb.service
deleted file mode 100644
index 39054cf2e1d..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/securecockroachdb.service
+++ /dev/null
@@ -1,16 +0,0 @@
-[Unit]
-Description=Cockroach Database cluster node
-Requires=network.target
-[Service]
-Type=notify
-WorkingDirectory=/var/lib/cockroach
-ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25
-TimeoutStopSec=60
-Restart=always
-RestartSec=10
-StandardOutput=syslog
-StandardError=syslog
-SyslogIdentifier=cockroach
-User=cockroach
-[Install]
-WantedBy=default.target
diff --git a/src/current/_includes/v2.1/prod-deployment/synchronize-clocks.md b/src/current/_includes/v2.1/prod-deployment/synchronize-clocks.md
deleted file mode 100644
index 5257e7a9640..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/synchronize-clocks.md
+++ /dev/null
@@ -1,173 +0,0 @@
-CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node.
-
-{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %}
-
-[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well.
-
-1. SSH to the first machine.
-
-2. Disable `timesyncd`, which tends to be active by default on some Linux distributions:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo timedatectl set-ntp no
- ~~~
-
- Verify that `timesyncd` is off:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ timedatectl
- ~~~
-
- Look for `Network time on: no` or `NTP enabled: no` in the output.
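-
-    For reference, the relevant portion of `timedatectl` output with `timesyncd` disabled looks something like this (an illustrative sketch; field names vary by distribution):
-
-    ~~~
-    Network time on: no
-    NTP synchronized: no
-    ~~~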
-
-3. Install the `ntp` package:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo apt-get install ntp
- ~~~
-
-4. Stop the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp stop
- ~~~
-
-5. Sync the machine's clock with Google's NTP service:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpd -b time.google.com
- ~~~
-
- To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
-
- {% include copy-clipboard.html %}
- ~~~
- server time1.google.com iburst
- server time2.google.com iburst
- server time3.google.com iburst
- server time4.google.com iburst
- ~~~
-
- Restart the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp start
- ~~~
-
- {{site.data.alerts.callout_info}}We recommend Google's external NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.{{site.data.alerts.end}}
-
-6. Verify that the machine is using a Google NTP server:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpq -p
- ~~~
-
- The active NTP server will be marked with an asterisk.
-
-7. Repeat these steps for each machine where a CockroachDB node will run.
-
-{% elsif page.title contains "Google" %}
-
-Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can’t predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:
-
-- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances).
-- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, [configure the non-GCE machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks).
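-
-For example, on a Linux GCE instance running `ntpd`, the Google instructions linked above boil down to pointing `/etc/ntp.conf` at the internal metadata server (a sketch; the exact procedure depends on your image):
-
-{% include copy-clipboard.html %}
-~~~
-server metadata.google.internal iburst
-~~~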
-
-{% elsif page.title contains "AWS" %}
-
-Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.
-
-- If you plan to run your entire cluster on AWS, [configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
-- However, if you plan to run a hybrid cluster across AWS and other cloud providers or environments, [configure all machines to use Google's external NTP service](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks), which is comparably accurate and also handles "smearing" the leap second.
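-
-For example, on an EC2 instance running `ntpd`, the Amazon instructions linked above boil down to adding the service's link-local address to `/etc/ntp.conf` (a sketch; Amazon's documentation also covers `chrony`):
-
-{% include copy-clipboard.html %}
-~~~
-server 169.254.169.123 prefer iburst
-~~~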
-
-{% elsif page.title contains "Azure" %}
-
-[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here. However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems.
-
-1. SSH to the first machine.
-
-2. Find the ID of the Hyper-V Time Synchronization device:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus
- ~~~
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3
- ~~~
-
- ~~~
- VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization]
- Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee}
- Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee
- Rel_ID=12, target_cpu=0
- ~~~
-
-3. Unbind the device, using the `Device_ID` from the previous command's output:
-
- {% include copy-clipboard.html %}
- ~~~ shell
-    $ echo <Device_ID> | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind
- ~~~
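-
-    To spot-check that the unbind took effect, list the devices still bound to the `hv_util` driver; the `Device_ID` you echoed should no longer appear (a sketch; sysfs paths can vary by kernel version):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ ls /sys/bus/vmbus/drivers/hv_util/
-    ~~~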
-
-4. Install the `ntp` package:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo apt-get install ntp
- ~~~
-
-5. Stop the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp stop
- ~~~
-
-6. Sync the machine's clock with Google's NTP service:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpd -b time.google.com
- ~~~
-
- To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
-
- {% include copy-clipboard.html %}
- ~~~
- server time1.google.com iburst
- server time2.google.com iburst
- server time3.google.com iburst
- server time4.google.com iburst
- ~~~
-
- Restart the NTP daemon:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo service ntp start
- ~~~
-
- {{site.data.alerts.callout_info}}We recommend Google's NTP service because they handle "smearing" the leap second. If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine.{{site.data.alerts.end}}
-
-7. Verify that the machine is using a Google NTP server:
-
- {% include copy-clipboard.html %}
- ~~~ shell
- $ sudo ntpq -p
- ~~~
-
- The active NTP server will be marked with an asterisk.
-
-8. Repeat these steps for each machine where a CockroachDB node will run.
-
-{% endif %}
diff --git a/src/current/_includes/v2.1/prod-deployment/use-cluster.md b/src/current/_includes/v2.1/prod-deployment/use-cluster.md
deleted file mode 100644
index 134f9fc6912..00000000000
--- a/src/current/_includes/v2.1/prod-deployment/use-cluster.md
+++ /dev/null
@@ -1,11 +0,0 @@
-Now that your deployment is working, you can:
-
-1. [Implement your data model](sql-statements.html).
-2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html).
-3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node.
-
-You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see [Configure Replication Zones](configure-replication-zones.html).
-
-{{site.data.alerts.callout_danger}}
-When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
-{{site.data.alerts.end}}
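-
-For example, raising the cluster's default replication factor to 5 is a one-liner (a sketch; replace the placeholder address, and run it from a machine with the `certs` directory):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --host=<address of any node> --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;"
-~~~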
diff --git a/src/current/_includes/v2.1/sql/connection-parameters.md b/src/current/_includes/v2.1/sql/connection-parameters.md
deleted file mode 100644
index 0a0ad048ead..00000000000
--- a/src/current/_includes/v2.1/sql/connection-parameters.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Flag | Description
------|------------
-`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.<br><br>**Env Variable:** `COCKROACH_HOST`<br>**Default:** `localhost:26257`
-`--port`<br>`-p` | The server port to connect to. Note: The port number can also be specified via `--host`.<br><br>**Env Variable:** `COCKROACH_PORT`<br>**Default:** `26257`
-`--user`<br>`-u` | The [SQL user](create-and-manage-users.html) that will own the client session.<br><br>**Env Variable:** `COCKROACH_USER`<br>**Default:** `root`
-`--insecure` | Use an insecure connection.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br>**Default:** `false`
-`--certs-dir` | The path to the [certificate directory](create-security-certificates.html) containing the CA and client certificates and client key.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br>**Default:** `${HOME}/.cockroach-certs/`
-`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.<br><br>**Env Variable:** `COCKROACH_URL`<br>**Default:** no URL
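-
-For example, these two invocations are equivalent (a sketch assuming a local secure cluster and the default `root` user):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --host=localhost:26257 --user=root
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --url="postgresql://root@localhost:26257?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key"
-~~~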
diff --git a/src/current/_includes/v2.1/sql/diagrams/add_column.html b/src/current/_includes/v2.1/sql/diagrams/add_column.html
deleted file mode 100644
index f59fd135d0e..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/add_column.html
+++ /dev/null
@@ -1,52 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/add_constraint.html b/src/current/_includes/v2.1/sql/diagrams/add_constraint.html
deleted file mode 100644
index a8f3b1c9c61..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/add_constraint.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_column.html b/src/current/_includes/v2.1/sql/diagrams/alter_column.html
deleted file mode 100644
index 773613a76e6..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_column.html
+++ /dev/null
@@ -1,110 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_sequence_options.html b/src/current/_includes/v2.1/sql/diagrams/alter_sequence_options.html
deleted file mode 100644
index ee56ccdaee6..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_sequence_options.html
+++ /dev/null
@@ -1,63 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_table_partition_by.html b/src/current/_includes/v2.1/sql/diagrams/alter_table_partition_by.html
deleted file mode 100644
index 073c8794394..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_table_partition_by.html
+++ /dev/null
@@ -1,81 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_type.html b/src/current/_includes/v2.1/sql/diagrams/alter_type.html
deleted file mode 100644
index ace962f3b99..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_type.html
+++ /dev/null
@@ -1,45 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_user_password.html b/src/current/_includes/v2.1/sql/diagrams/alter_user_password.html
deleted file mode 100644
index 0e014933d1b..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_user_password.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_view.html b/src/current/_includes/v2.1/sql/diagrams/alter_view.html
deleted file mode 100644
index 2e481fa60aa..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_view.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_zone_database.html b/src/current/_includes/v2.1/sql/diagrams/alter_zone_database.html
deleted file mode 100644
index 8443e7d9f27..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_zone_database.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_zone_index.html b/src/current/_includes/v2.1/sql/diagrams/alter_zone_index.html
deleted file mode 100644
index 508a4cb0604..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_zone_index.html
+++ /dev/null
@@ -1,41 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_zone_range.html b/src/current/_includes/v2.1/sql/diagrams/alter_zone_range.html
deleted file mode 100644
index 69f084bdc9e..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_zone_range.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/alter_zone_table.html b/src/current/_includes/v2.1/sql/diagrams/alter_zone_table.html
deleted file mode 100644
index bd165a30879..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/alter_zone_table.html
+++ /dev/null
@@ -1,44 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/backup.html b/src/current/_includes/v2.1/sql/diagrams/backup.html
deleted file mode 100644
index 1974cb5bcb0..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/backup.html
+++ /dev/null
@@ -1,73 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/begin_transaction.html b/src/current/_includes/v2.1/sql/diagrams/begin_transaction.html
deleted file mode 100644
index b859334c156..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/begin_transaction.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/cancel_job.html b/src/current/_includes/v2.1/sql/diagrams/cancel_job.html
deleted file mode 100644
index e8cbeb150fe..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/cancel_job.html
+++ /dev/null
@@ -1,24 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/cancel_query.html b/src/current/_includes/v2.1/sql/diagrams/cancel_query.html
deleted file mode 100644
index 612db072eb4..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/cancel_query.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/cancel_session.html b/src/current/_includes/v2.1/sql/diagrams/cancel_session.html
deleted file mode 100644
index 857f87adb18..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/cancel_session.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/check_column_level.html b/src/current/_includes/v2.1/sql/diagrams/check_column_level.html
deleted file mode 100644
index 59eec3e3c15..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/check_column_level.html
+++ /dev/null
@@ -1,70 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/check_table_level.html b/src/current/_includes/v2.1/sql/diagrams/check_table_level.html
deleted file mode 100644
index 6066d637220..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/check_table_level.html
+++ /dev/null
@@ -1,60 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/col_qualification.html b/src/current/_includes/v2.1/sql/diagrams/col_qualification.html
deleted file mode 100644
index 8b9b2d4fa1d..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/col_qualification.html
+++ /dev/null
@@ -1,132 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/column_def.html b/src/current/_includes/v2.1/sql/diagrams/column_def.html
deleted file mode 100644
index 284e8dc5838..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/column_def.html
+++ /dev/null
@@ -1,23 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/commit_transaction.html b/src/current/_includes/v2.1/sql/diagrams/commit_transaction.html
deleted file mode 100644
index 12914f3e1cb..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/commit_transaction.html
+++ /dev/null
@@ -1,17 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_changefeed.html b/src/current/_includes/v2.1/sql/diagrams/create_changefeed.html
deleted file mode 100644
index 82b77b8360e..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_changefeed.html
+++ /dev/null
@@ -1,46 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_database.html b/src/current/_includes/v2.1/sql/diagrams/create_database.html
deleted file mode 100644
index c621b08e138..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_database.html
+++ /dev/null
@@ -1,42 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_index.html b/src/current/_includes/v2.1/sql/diagrams/create_index.html
deleted file mode 100644
index dc0479dab14..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_index.html
+++ /dev/null
@@ -1,91 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_inverted_index.html b/src/current/_includes/v2.1/sql/diagrams/create_inverted_index.html
deleted file mode 100644
index 266281c12c1..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_inverted_index.html
+++ /dev/null
@@ -1,64 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_role.html b/src/current/_includes/v2.1/sql/diagrams/create_role.html
deleted file mode 100644
index 3c9c43dedf3..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_role.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_sequence.html b/src/current/_includes/v2.1/sql/diagrams/create_sequence.html
deleted file mode 100644
index 4363cc0b087..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_sequence.html
+++ /dev/null
@@ -1,58 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_stats.html b/src/current/_includes/v2.1/sql/diagrams/create_stats.html
deleted file mode 100644
index 6180070d146..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_stats.html
+++ /dev/null
@@ -1,24 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_table.html b/src/current/_includes/v2.1/sql/diagrams/create_table.html
deleted file mode 100644
index 456c9f64ab7..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_table.html
+++ /dev/null
@@ -1,67 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_table_as.html b/src/current/_includes/v2.1/sql/diagrams/create_table_as.html
deleted file mode 100644
index dbf1028099a..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_table_as.html
+++ /dev/null
@@ -1,50 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_user.html b/src/current/_includes/v2.1/sql/diagrams/create_user.html
deleted file mode 100644
index 1dc78bb289a..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_user.html
+++ /dev/null
@@ -1,39 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/create_view.html b/src/current/_includes/v2.1/sql/diagrams/create_view.html
deleted file mode 100644
index 044db4c888c..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/create_view.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/default_value_column_level.html b/src/current/_includes/v2.1/sql/diagrams/default_value_column_level.html
deleted file mode 100644
index 0ba9afca9c4..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/default_value_column_level.html
+++ /dev/null
@@ -1,64 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/delete.html b/src/current/_includes/v2.1/sql/diagrams/delete.html
deleted file mode 100644
index d79cbd6e082..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/delete.html
+++ /dev/null
@@ -1,66 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_column.html b/src/current/_includes/v2.1/sql/diagrams/drop_column.html
deleted file mode 100644
index 384f5219d9d..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_column.html
+++ /dev/null
@@ -1,43 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_constraint.html b/src/current/_includes/v2.1/sql/diagrams/drop_constraint.html
deleted file mode 100644
index 77cea230ccd..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_constraint.html
+++ /dev/null
@@ -1,45 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_database.html b/src/current/_includes/v2.1/sql/diagrams/drop_database.html
deleted file mode 100644
index 038eb0befc1..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_database.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_index.html b/src/current/_includes/v2.1/sql/diagrams/drop_index.html
deleted file mode 100644
index 2dd8b3636ee..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_index.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_role.html b/src/current/_includes/v2.1/sql/diagrams/drop_role.html
deleted file mode 100644
index 0037ebf56ce..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_role.html
+++ /dev/null
@@ -1,25 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_sequence.html b/src/current/_includes/v2.1/sql/diagrams/drop_sequence.html
deleted file mode 100644
index 6507f7dec30..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_sequence.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_table.html b/src/current/_includes/v2.1/sql/diagrams/drop_table.html
deleted file mode 100644
index 18ad4fdd502..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_table.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_user.html b/src/current/_includes/v2.1/sql/diagrams/drop_user.html
deleted file mode 100644
index 57c3db991b9..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_user.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/drop_view.html b/src/current/_includes/v2.1/sql/diagrams/drop_view.html
deleted file mode 100644
index d95db116000..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/drop_view.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/experimental_audit.html b/src/current/_includes/v2.1/sql/diagrams/experimental_audit.html
deleted file mode 100644
index 46cc527074a..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/experimental_audit.html
+++ /dev/null
@@ -1,39 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/explain.html b/src/current/_includes/v2.1/sql/diagrams/explain.html
deleted file mode 100644
index 61716ec485b..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/explain.html
+++ /dev/null
@@ -1,32 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/explain_analyze.html b/src/current/_includes/v2.1/sql/diagrams/explain_analyze.html
deleted file mode 100644
index e79e76f6fc0..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/explain_analyze.html
+++ /dev/null
@@ -1,23 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/export.html b/src/current/_includes/v2.1/sql/diagrams/export.html
deleted file mode 100644
index 05ad8e2a864..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/export.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/family_def.html b/src/current/_includes/v2.1/sql/diagrams/family_def.html
deleted file mode 100644
index 1dda01d9e79..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/family_def.html
+++ /dev/null
@@ -1,30 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/foreign_key_column_level.html b/src/current/_includes/v2.1/sql/diagrams/foreign_key_column_level.html
deleted file mode 100644
index a963e586425..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/foreign_key_column_level.html
+++ /dev/null
@@ -1,75 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/foreign_key_table_level.html b/src/current/_includes/v2.1/sql/diagrams/foreign_key_table_level.html
deleted file mode 100644
index 2eb3498af46..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/foreign_key_table_level.html
+++ /dev/null
@@ -1,85 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/grant_privileges.html b/src/current/_includes/v2.1/sql/diagrams/grant_privileges.html
deleted file mode 100644
index da7f44e5160..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/grant_privileges.html
+++ /dev/null
@@ -1,74 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/grant_roles.html b/src/current/_includes/v2.1/sql/diagrams/grant_roles.html
deleted file mode 100644
index f8eee0dc766..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/grant_roles.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/import_csv.html b/src/current/_includes/v2.1/sql/diagrams/import_csv.html
deleted file mode 100644
index ad4f863f5ab..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/import_csv.html
+++ /dev/null
@@ -1,52 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/import_dump.html b/src/current/_includes/v2.1/sql/diagrams/import_dump.html
deleted file mode 100644
index 1c94207f03e..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/import_dump.html
+++ /dev/null
@@ -1,27 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/index_def.html b/src/current/_includes/v2.1/sql/diagrams/index_def.html
deleted file mode 100644
index 7808b2e4800..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/index_def.html
+++ /dev/null
@@ -1,85 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/insert.html b/src/current/_includes/v2.1/sql/diagrams/insert.html
deleted file mode 100644
index 81576677379..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/insert.html
+++ /dev/null
@@ -1,81 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/interleave.html b/src/current/_includes/v2.1/sql/diagrams/interleave.html
deleted file mode 100644
index 09bb9c35b5b..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/interleave.html
+++ /dev/null
@@ -1,69 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/joined_table.html b/src/current/_includes/v2.1/sql/diagrams/joined_table.html
deleted file mode 100644
index 68b66314702..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/joined_table.html
+++ /dev/null
@@ -1,100 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/limit_clause.html b/src/current/_includes/v2.1/sql/diagrams/limit_clause.html
deleted file mode 100644
index 98d5114a88e..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/limit_clause.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/not_null_column_level.html b/src/current/_includes/v2.1/sql/diagrams/not_null_column_level.html
deleted file mode 100644
index 52e17e9d57d..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/not_null_column_level.html
+++ /dev/null
@@ -1,59 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/offset_clause.html b/src/current/_includes/v2.1/sql/diagrams/offset_clause.html
deleted file mode 100644
index d6dc4873ee5..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/offset_clause.html
+++ /dev/null
@@ -1,26 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/on_conflict.html b/src/current/_includes/v2.1/sql/diagrams/on_conflict.html
deleted file mode 100644
index 7a64a45547b..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/on_conflict.html
+++ /dev/null
@@ -1,107 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/opt_interleave.html b/src/current/_includes/v2.1/sql/diagrams/opt_interleave.html
deleted file mode 100644
index 5825c01b310..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/opt_interleave.html
+++ /dev/null
@@ -1,33 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/pause_job.html b/src/current/_includes/v2.1/sql/diagrams/pause_job.html
deleted file mode 100644
index 3d0949c6088..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/pause_job.html
+++ /dev/null
@@ -1,24 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/primary_key_column_level.html b/src/current/_includes/v2.1/sql/diagrams/primary_key_column_level.html
deleted file mode 100644
index f938b641654..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/primary_key_column_level.html
+++ /dev/null
@@ -1,59 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/primary_key_table_level.html b/src/current/_includes/v2.1/sql/diagrams/primary_key_table_level.html
deleted file mode 100644
index db8ece49c39..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/primary_key_table_level.html
+++ /dev/null
@@ -1,63 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/release_savepoint.html b/src/current/_includes/v2.1/sql/diagrams/release_savepoint.html
deleted file mode 100644
index 194ce6573ca..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/release_savepoint.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/rename_column.html b/src/current/_includes/v2.1/sql/diagrams/rename_column.html
deleted file mode 100644
index 2d275bc9de7..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/rename_column.html
+++ /dev/null
@@ -1,44 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/rename_database.html b/src/current/_includes/v2.1/sql/diagrams/rename_database.html
deleted file mode 100644
index ce9ddd3ddba..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/rename_database.html
+++ /dev/null
@@ -1,30 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/rename_index.html b/src/current/_includes/v2.1/sql/diagrams/rename_index.html
deleted file mode 100644
index 82ed2e90255..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/rename_index.html
+++ /dev/null
@@ -1,33 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/rename_sequence.html b/src/current/_includes/v2.1/sql/diagrams/rename_sequence.html
deleted file mode 100644
index a564d9db425..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/rename_sequence.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/rename_table.html b/src/current/_includes/v2.1/sql/diagrams/rename_table.html
deleted file mode 100644
index 316c56482eb..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/rename_table.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/reset_csetting.html b/src/current/_includes/v2.1/sql/diagrams/reset_csetting.html
deleted file mode 100644
index 49e120ffc69..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/reset_csetting.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/reset_session.html b/src/current/_includes/v2.1/sql/diagrams/reset_session.html
deleted file mode 100644
index 0a47ec52d49..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/reset_session.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/restore.html b/src/current/_includes/v2.1/sql/diagrams/restore.html
deleted file mode 100644
index 4aec1b4819f..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/restore.html
+++ /dev/null
@@ -1,67 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/resume_job.html b/src/current/_includes/v2.1/sql/diagrams/resume_job.html
deleted file mode 100644
index 552bef86bce..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/resume_job.html
+++ /dev/null
@@ -1,24 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/revoke_privileges.html b/src/current/_includes/v2.1/sql/diagrams/revoke_privileges.html
deleted file mode 100644
index a6f9a1dee8e..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/revoke_privileges.html
+++ /dev/null
@@ -1,74 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/revoke_roles.html b/src/current/_includes/v2.1/sql/diagrams/revoke_roles.html
deleted file mode 100644
index a30aee75474..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/revoke_roles.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/rollback_transaction.html b/src/current/_includes/v2.1/sql/diagrams/rollback_transaction.html
deleted file mode 100644
index c34d5d12047..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/rollback_transaction.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/savepoint.html b/src/current/_includes/v2.1/sql/diagrams/savepoint.html
deleted file mode 100644
index 9b7dc70608b..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/savepoint.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/select.html b/src/current/_includes/v2.1/sql/diagrams/select.html
deleted file mode 100644
index 9f743234e06..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/select.html
+++ /dev/null
@@ -1,38 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/select_clause.html b/src/current/_includes/v2.1/sql/diagrams/select_clause.html
deleted file mode 100644
index 88dc35507df..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/select_clause.html
+++ /dev/null
@@ -1,53 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/set_cluster_setting.html b/src/current/_includes/v2.1/sql/diagrams/set_cluster_setting.html
deleted file mode 100644
index b6554c7be52..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/set_cluster_setting.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/set_operation.html b/src/current/_includes/v2.1/sql/diagrams/set_operation.html
deleted file mode 100644
index aa0e63023dc..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/set_operation.html
+++ /dev/null
@@ -1,32 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/set_transaction.html b/src/current/_includes/v2.1/sql/diagrams/set_transaction.html
deleted file mode 100644
index 5946f58214a..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/set_transaction.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/set_var.html b/src/current/_includes/v2.1/sql/diagrams/set_var.html
deleted file mode 100644
index 96bb04e7cf6..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/set_var.html
+++ /dev/null
@@ -1,33 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_backup.html b/src/current/_includes/v2.1/sql/diagrams/show_backup.html
deleted file mode 100644
index 0f4f4e2c379..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_backup.html
+++ /dev/null
@@ -1,19 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_cluster_setting.html b/src/current/_includes/v2.1/sql/diagrams/show_cluster_setting.html
deleted file mode 100644
index d575106689f..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_cluster_setting.html
+++ /dev/null
@@ -1,34 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_columns.html b/src/current/_includes/v2.1/sql/diagrams/show_columns.html
deleted file mode 100644
index 7b47a3b3123..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_columns.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_constraints.html b/src/current/_includes/v2.1/sql/diagrams/show_constraints.html
deleted file mode 100644
index 9c520ae9bc6..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_constraints.html
+++ /dev/null
@@ -1,25 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_create.html b/src/current/_includes/v2.1/sql/diagrams/show_create.html
deleted file mode 100644
index 09c0fa4c2a1..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_create.html
+++ /dev/null
@@ -1,16 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_databases.html b/src/current/_includes/v2.1/sql/diagrams/show_databases.html
deleted file mode 100644
index 487bfc4e629..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_databases.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_grants.html b/src/current/_includes/v2.1/sql/diagrams/show_grants.html
deleted file mode 100644
index 92a7932dc22..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_grants.html
+++ /dev/null
@@ -1,61 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_index.html b/src/current/_includes/v2.1/sql/diagrams/show_index.html
deleted file mode 100644
index 3014183c521..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_index.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_jobs.html b/src/current/_includes/v2.1/sql/diagrams/show_jobs.html
deleted file mode 100644
index b59d4d176d0..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_jobs.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_queries.html b/src/current/_includes/v2.1/sql/diagrams/show_queries.html
deleted file mode 100644
index 26376243dac..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_queries.html
+++ /dev/null
@@ -1,20 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_ranges.html b/src/current/_includes/v2.1/sql/diagrams/show_ranges.html
deleted file mode 100644
index 268530ff8f4..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_ranges.html
+++ /dev/null
@@ -1,32 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_roles.html b/src/current/_includes/v2.1/sql/diagrams/show_roles.html
deleted file mode 100644
index fd508395e0b..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_roles.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_schemas.html b/src/current/_includes/v2.1/sql/diagrams/show_schemas.html
deleted file mode 100644
index efa07764533..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_schemas.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_sessions.html b/src/current/_includes/v2.1/sql/diagrams/show_sessions.html
deleted file mode 100644
index 3b2aa5b16ee..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_sessions.html
+++ /dev/null
@@ -1,20 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_stats.html b/src/current/_includes/v2.1/sql/diagrams/show_stats.html
deleted file mode 100644
index 0e350b93c0f..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_stats.html
+++ /dev/null
@@ -1,20 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_tables.html b/src/current/_includes/v2.1/sql/diagrams/show_tables.html
deleted file mode 100644
index 570e6222172..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_tables.html
+++ /dev/null
@@ -1,22 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_trace.html b/src/current/_includes/v2.1/sql/diagrams/show_trace.html
deleted file mode 100644
index 37271dc87b5..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_trace.html
+++ /dev/null
@@ -1,25 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_users.html b/src/current/_includes/v2.1/sql/diagrams/show_users.html
deleted file mode 100644
index 7c33b7f00b4..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_users.html
+++ /dev/null
@@ -1,14 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_var.html b/src/current/_includes/v2.1/sql/diagrams/show_var.html
deleted file mode 100644
index fb7ec6f4ce8..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_var.html
+++ /dev/null
@@ -1,20 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/show_zone.html b/src/current/_includes/v2.1/sql/diagrams/show_zone.html
deleted file mode 100644
index 83052dd1d5c..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/show_zone.html
+++ /dev/null
@@ -1,73 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/simple_select_clause.html b/src/current/_includes/v2.1/sql/diagrams/simple_select_clause.html
deleted file mode 100644
index 4f91c71493a..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/simple_select_clause.html
+++ /dev/null
@@ -1,107 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/sort_clause.html b/src/current/_includes/v2.1/sql/diagrams/sort_clause.html
deleted file mode 100644
index dbac057629e..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/sort_clause.html
+++ /dev/null
@@ -1,55 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/split_index_at.html b/src/current/_includes/v2.1/sql/diagrams/split_index_at.html
deleted file mode 100644
index 51daee7e3c7..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/split_index_at.html
+++ /dev/null
@@ -1,35 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/split_table_at.html b/src/current/_includes/v2.1/sql/diagrams/split_table_at.html
deleted file mode 100644
index a694595b9b5..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/split_table_at.html
+++ /dev/null
@@ -1,30 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/stmt_block.html b/src/current/_includes/v2.1/sql/diagrams/stmt_block.html
deleted file mode 100644
index aa72ece6dce..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/stmt_block.html
+++ /dev/null
@@ -1,11071 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/table_clause.html b/src/current/_includes/v2.1/sql/diagrams/table_clause.html
deleted file mode 100644
index 97691481d76..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/table_clause.html
+++ /dev/null
@@ -1,15 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/table_constraint.html b/src/current/_includes/v2.1/sql/diagrams/table_constraint.html
deleted file mode 100644
index ac37f0f1eac..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/table_constraint.html
+++ /dev/null
@@ -1,120 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/table_ref.html b/src/current/_includes/v2.1/sql/diagrams/table_ref.html
deleted file mode 100644
index 0010ffa90f8..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/table_ref.html
+++ /dev/null
@@ -1,59 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/truncate.html b/src/current/_includes/v2.1/sql/diagrams/truncate.html
deleted file mode 100644
index 06cb91a310c..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/truncate.html
+++ /dev/null
@@ -1,28 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/unique_column_level.html b/src/current/_includes/v2.1/sql/diagrams/unique_column_level.html
deleted file mode 100644
index c7c178e9351..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/unique_column_level.html
+++ /dev/null
@@ -1,59 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/unique_table_level.html b/src/current/_includes/v2.1/sql/diagrams/unique_table_level.html
deleted file mode 100644
index e77a972161a..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/unique_table_level.html
+++ /dev/null
@@ -1,63 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/update.html b/src/current/_includes/v2.1/sql/diagrams/update.html
deleted file mode 100644
index 7ead70594b4..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/update.html
+++ /dev/null
@@ -1,118 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/upsert.html b/src/current/_includes/v2.1/sql/diagrams/upsert.html
deleted file mode 100644
index b4d7987ddfe..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/upsert.html
+++ /dev/null
@@ -1,71 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/validate_constraint.html b/src/current/_includes/v2.1/sql/diagrams/validate_constraint.html
deleted file mode 100644
index d470d8dd98f..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/validate_constraint.html
+++ /dev/null
@@ -1,36 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/_includes/v2.1/sql/diagrams/values_clause.html b/src/current/_includes/v2.1/sql/diagrams/values_clause.html
deleted file mode 100644
index 34f78e982b4..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/values_clause.html
+++ /dev/null
@@ -1,27 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/diagrams/with_clause.html b/src/current/_includes/v2.1/sql/diagrams/with_clause.html
deleted file mode 100644
index 0f746306ae3..00000000000
--- a/src/current/_includes/v2.1/sql/diagrams/with_clause.html
+++ /dev/null
@@ -1,71 +0,0 @@
-
diff --git a/src/current/_includes/v2.1/sql/function-special-forms.md b/src/current/_includes/v2.1/sql/function-special-forms.md
deleted file mode 100644
index bb4b06bbe39..00000000000
--- a/src/current/_includes/v2.1/sql/function-special-forms.md
+++ /dev/null
@@ -1,27 +0,0 @@
-| Special form | Equivalent to |
-|-----------------------------------------------------------|---------------------------------------------|
-| `CURRENT_CATALOG` | `current_catalog()` |
-| `CURRENT_DATE` | `current_date()` |
-| `CURRENT_ROLE` | `current_user()` |
-| `CURRENT_SCHEMA` | `current_schema()` |
-| `CURRENT_TIMESTAMP` | `current_timestamp()` |
-| `CURRENT_TIME` | `current_time()` |
-| `CURRENT_USER` | `current_user()` |
-| `EXTRACT(<part> FROM <value>)` | `extract("<part>", <value>)` |
-| `EXTRACT_DURATION(<part> FROM <value>)` | `extract_duration("<part>", <value>)` |
-| `OVERLAY(<text1> PLACING <text2> FROM <int1> FOR <int2>)` | `overlay(<text1>, <text2>, <int1>, <int2>)` |
-| `OVERLAY(<text1> PLACING <text2> FROM <int>)` | `overlay(<text1>, <text2>, <int>)` |
-| `POSITION(<text1> IN <text2>)` | `strpos(<text2>, <text1>)` |
-| `SESSION_USER` | `current_user()` |
-| `SUBSTRING(<text> FOR <int1> FROM <int2>)` | `substring(<text>, <int2>, <int1>)` |
-| `SUBSTRING(<text> FOR <int>)` | `substring(<text>, 1, <int>)` |
-| `SUBSTRING(<text> FROM <int1> FOR <int2>)` | `substring(<text>, <int1>, <int2>)` |
-| `SUBSTRING(<text> FROM <int>)` | `substring(<text>, <int>)` |
-| `TRIM(<text1> FROM <text2>)` | `btrim(<text2>, <text1>)` |
-| `TRIM(<text1>, <text2>)` | `btrim(<text1>, <text2>)` |
-| `TRIM(FROM <text>)` | `btrim(<text>)` |
-| `TRIM(LEADING <text1> FROM <text2>)` | `ltrim(<text2>, <text1>)` |
-| `TRIM(LEADING FROM <text>)` | `ltrim(<text>)` |
-| `TRIM(TRAILING <text1> FROM <text2>)` | `rtrim(<text2>, <text1>)` |
-| `TRIM(TRAILING FROM <text>)` | `rtrim(<text>)` |
-| `USER` | `current_user()` |
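-
-As a quick demonstration of one of these equivalences (a sketch assuming a running local insecure cluster), both expressions below strip the leading `x` characters and return `docs`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --execute="SELECT TRIM(LEADING 'x' FROM 'xxdocs'), ltrim('xxdocs', 'x');"
-~~~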
diff --git a/src/current/_includes/v2.1/start-in-docker/mac-linux-steps.md b/src/current/_includes/v2.1/start-in-docker/mac-linux-steps.md
deleted file mode 100644
index c2f8ef608df..00000000000
--- a/src/current/_includes/v2.1/start-in-docker/mac-linux-steps.md
+++ /dev/null
@@ -1,148 +0,0 @@
-## Before you begin
-
-If you have not already installed the official CockroachDB Docker image, go to [Install CockroachDB](install-cockroachdb.html) and follow the instructions under **Use Docker**.
-
-## Step 1. Create a bridge network
-
-Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a [bridge network](https://docs.docker.com/engine/userguide/networking/#/a-bridge-network). The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker network create -d bridge roachnet
-~~~
-
-We've used `roachnet` as the network name here and in subsequent steps, but feel free to give your network any name you like.
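-
-To confirm the network was created, you can list your Docker networks; `roachnet` should appear with the `bridge` driver:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker network ls
-~~~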
-
-## Step 2. Start the first node
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker run -d \
---name=roach1 \
---hostname=roach1 \
---net=roachnet \
--p 26257:26257 -p 8080:8080 \
--v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data" \
-{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure
-~~~
-
-This command creates a container and starts the first CockroachDB node inside it. Let's look at each part:
-
-- `docker run`: The Docker command to start a new container.
-- `-d`: This flag runs the container in the background so you can continue the next steps in the same shell.
-- `--name`: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container.
-- `--hostname`: The hostname for the container. You will use this to join other containers/nodes to the cluster.
-- `--net`: The bridge network for the container to join. See step 1 for more details.
-- `-p 26257:26257 -p 8080:8080`: These flags map the default port for inter-node and client-node communication (`26257`) and the default port for HTTP requests to the Admin UI (`8080`) from the container to the host. This enables inter-container communication and makes it possible to call up the Admin UI from a browser.
-- `-v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data"`: This flag mounts a host directory as a data volume. This means that data and logs for this node will be stored in `${PWD}/cockroach-data/roach1` on the host and will persist after the container is stopped or deleted. For more details, see Docker's Bind Mounts topic.
-- `{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure`: The CockroachDB command to [start a node](start-a-node.html) in the container in insecure mode.
-
-## Step 3. Add nodes to the cluster
-
-At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities.
-
-To simulate a real deployment, scale your cluster by adding two more nodes:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker run -d \
---name=roach2 \
---hostname=roach2 \
---net=roachnet \
--v "${PWD}/cockroach-data/roach2:/cockroach/cockroach-data" \
-{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker run -d \
---name=roach3 \
---hostname=roach3 \
---net=roachnet \
--v "${PWD}/cockroach-data/roach3:/cockroach/cockroach-data" \
-{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
-~~~
-
-These commands add two more containers and start CockroachDB nodes inside them, joining them to the first node. There are only a few differences to note from step 2:
-
-- `-v`: This flag mounts a host directory as a data volume. Data and logs for these nodes will be stored in `${PWD}/cockroach-data/roach2` and `${PWD}/cockroach-data/roach3` on the host and will persist after the containers are stopped or deleted.
-- `--join`: This flag joins the new nodes to the cluster, using the first container's `hostname`. Otherwise, all [`cockroach start`](start-a-node.html) defaults are accepted. Note that since each node is in a unique container, using identical default ports won’t cause conflicts.
-
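-To confirm all three nodes are up before testing the cluster, you can list the running containers; `roach1`, `roach2`, and `roach3` should each show a recent `Up` status:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker ps
-~~~
-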
-## Step 4. Test the cluster
-
-Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, use the `docker exec` command to start the [built-in SQL shell](use-the-built-in-sql-client.html) in the first container:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker exec -it roach1 ./cockroach sql --insecure
-~~~
-
-Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE bank;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO bank.accounts VALUES (1, 1000.50);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM bank.accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000.5 |
-+----+---------+
-(1 row)
-~~~
-
-Exit the SQL shell on node 1:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-Then start the SQL shell in the second container:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ docker exec -it roach2 ./cockroach sql --insecure
-~~~
-
-Now run the same `SELECT` query:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM bank.accounts;
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-| 1 | 1000.5 |
-+----+---------+
-(1 row)
-~~~
-
-As you can see, node 1 and node 2 behaved identically as SQL gateways.
-
-When you're done, exit the SQL shell on node 2:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md b/src/current/_includes/v2.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md
deleted file mode 100644
index 58f06d6c0e7..00000000000
--- a/src/current/_includes/v2.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md
+++ /dev/null
@@ -1,32 +0,0 @@
-In addition to [constraining replicas to specific datacenters](configure-replication-zones.html#per-replica-constraints-to-specific-datacenters), you may also specify preferences for where the range's leaseholders should be placed. This can result in increased performance in some scenarios.
-
-The [`ALTER TABLE ... CONFIGURE ZONE`](configure-zone.html) statement below requires that the cluster try to place the ranges' leaseholders in zone `us-east-1b`; if that is not possible, it will try to place them in zone `us-east-1a`.
-
-For more information about how the `lease_preferences` field works, see its description in the [Replication zone variables](configure-replication-zones.html#replication-zone-variables) section.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE kv CONFIGURE ZONE USING num_replicas = 3, constraints = '{"+zone=us-east-1a": 1, "+zone=us-east-1b": 1}', lease_preferences = '[[+zone=us-east-1b], [+zone=us-east-1a]]';
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR TABLE kv;
-~~~
-
-~~~
- zone_name | config_sql
------------+------------------------------------------------------------------------
- test.kv | ALTER TABLE kv CONFIGURE ZONE USING +
- | range_min_bytes = 1048576, +
- | range_max_bytes = 67108864, +
- | gc.ttlseconds = 90000, +
- | num_replicas = 3, +
- | constraints = '{+zone=us-east-1a: 1, +zone=us-east-1b: 1}', +
- | lease_preferences = '[[+zone=us-east-1b], [+zone=us-east-1a]]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-database.md b/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-database.md
deleted file mode 100644
index b5e5b3e9347..00000000000
--- a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-database.md
+++ /dev/null
@@ -1,28 +0,0 @@
-To control replication for a specific database, use the `ALTER DATABASE ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER DATABASE test CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000;
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR DATABASE test;
-~~~
-
-~~~
- zone_name | config_sql
-+-----------+------------------------------------------+
- test | ALTER DATABASE test CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 100000,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md b/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md
deleted file mode 100644
index c8d1374086e..00000000000
--- a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md
+++ /dev/null
@@ -1,38 +0,0 @@
-{{site.data.alerts.callout_info}}
-This is an [enterprise-only](enterprise-licensing.html) feature.
-{{site.data.alerts.end}}
-
-The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes.
-
-To control replication for a specific secondary index, use the `ALTER INDEX ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected).
-
-{{site.data.alerts.callout_success}}
-To get the name of a secondary index, which you need for the `CONFIGURE ZONE` statement, use the [`SHOW INDEX`](show-index.html) or [`SHOW CREATE TABLE`](show-create.html) statements.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER INDEX tpch.customer@frequent_customers CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000;
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR INDEX tpch.customer@frequent_customers;
-~~~
-
-~~~
- zone_name | config_sql
-+----------------------------------+--------------------------------------------------------------------------+
- tpch.customer@frequent_customers | ALTER INDEX tpch.public.customer@frequent_customers CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 100000,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-system-range.md b/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-system-range.md
deleted file mode 100644
index 1222f7cdc7c..00000000000
--- a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-system-range.md
+++ /dev/null
@@ -1,41 +0,0 @@
-In addition to the databases and tables that are visible via the SQL interface, CockroachDB stores internal data in what are called system ranges. CockroachDB comes with pre-configured replication zones for some of these ranges:
-
-Zone Name | Description
-----------|-----------------------------
-`.meta` | The "meta" ranges contain the authoritative information about the location of all data in the cluster.<br><br>These ranges must retain a majority of replicas for the cluster as a whole to remain available, and historical queries are never run on them, so CockroachDB comes with a **pre-configured** `.meta` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure and a lower-than-default `gc.ttlseconds` to keep these ranges smaller for reliable performance.<br><br>If your cluster is running in multiple datacenters, it's a best practice to configure the meta ranges to have a copy in each datacenter.
-`.liveness` | The "liveness" range contains the authoritative information about which nodes are live at any given time.<br><br>These ranges must retain a majority of replicas for the cluster as a whole to remain available, and historical queries are never run on them, so CockroachDB comes with a **pre-configured** `.liveness` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure and a lower-than-default `gc.ttlseconds` to keep these ranges smaller for reliable performance.
-`.system` | There are system ranges for a variety of other important internal data, including information needed to allocate new table IDs and track the status of a cluster's nodes.<br><br>These ranges must retain a majority of replicas for the cluster as a whole to remain available, so CockroachDB comes with a **pre-configured** `.system` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure.
-`.timeseries` | The "timeseries" ranges contain monitoring data about the cluster that powers the graphs in CockroachDB's Admin UI. If necessary, you can add a `.timeseries` replication zone to control the replication of this data.
-
-{{site.data.alerts.callout_danger}}
-Use caution when editing replication zones for system ranges, as misconfiguring them could cause some (or all) parts of your cluster to stop working.
-{{site.data.alerts.end}}
-
-To control replication for one of the above sets of system ranges, use the [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) statement to define the values you want to change (other values will not be affected):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 7;
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR RANGE meta;
-~~~
-
-~~~
- zone_name | config_sql
-+-----------+---------------------------------------+
- .meta | ALTER RANGE meta CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 3600,
- | num_replicas = 7,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-table-partition.md b/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-table-partition.md
deleted file mode 100644
index 6e2ac1677fd..00000000000
--- a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-table-partition.md
+++ /dev/null
@@ -1,36 +0,0 @@
-{{site.data.alerts.callout_info}}
-This is an [enterprise-only](enterprise-licensing.html) feature.
-{{site.data.alerts.end}}
-
-To [control replication for table partitions](partitioning.html#replication-zones), use the `ALTER PARTITION ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER PARTITION north_america OF TABLE customers CONFIGURE ZONE USING num_replicas = 5, constraints = '[-region=EU]';
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR PARTITION north_america OF TABLE customers;
-~~~
-
-~~~
- zone_name | config_sql
-+------------------------------+-------------------------------------------------------------------------------+
- test.customers.north_america | ALTER PARTITION north_america OF INDEX customers@primary CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 100000,
- | num_replicas = 5,
- | constraints = '[-region=EU]',
- | lease_preferences = '[]'
-(1 row)
-~~~
-
-{{site.data.alerts.callout_success}}
-Since the syntax is the same for defining a replication zone for a table or index partition (e.g., `database.table.partition`), give partitions names that communicate what they are partitioning, e.g., `north_america_table` vs `north_america_idx1`.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-table.md b/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-table.md
deleted file mode 100644
index 468df8f9bac..00000000000
--- a/src/current/_includes/v2.1/zone-configs/create-a-replication-zone-for-a-table.md
+++ /dev/null
@@ -1,28 +0,0 @@
-To control replication for a specific table, use the `ALTER TABLE ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE customers CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000;
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR TABLE customers;
-~~~
-
-~~~
- zone_name | config_sql
-+----------------+--------------------------------------------+
- test.customers | ALTER TABLE customers CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 100000,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/edit-the-default-replication-zone.md b/src/current/_includes/v2.1/zone-configs/edit-the-default-replication-zone.md
deleted file mode 100644
index 1dd1cbbf43b..00000000000
--- a/src/current/_includes/v2.1/zone-configs/edit-the-default-replication-zone.md
+++ /dev/null
@@ -1,32 +0,0 @@
-{{site.data.alerts.callout_info}}
-{% include {{page.version.version}}/known-limitations/system-range-replication.md %}
-{{site.data.alerts.end}}
-
-To edit the default replication zone, use the `ALTER RANGE ... CONFIGURE ZONE` statement to define the values you want to change (other values will remain the same):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000;
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR RANGE default;
-~~~
-
-~~~
- zone_name | config_sql
-+-----------+------------------------------------------+
- .default | ALTER RANGE default CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 100000,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/remove-a-replication-zone.md b/src/current/_includes/v2.1/zone-configs/remove-a-replication-zone.md
deleted file mode 100644
index b379652c8c8..00000000000
--- a/src/current/_includes/v2.1/zone-configs/remove-a-replication-zone.md
+++ /dev/null
@@ -1,8 +0,0 @@
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE t CONFIGURE ZONE DISCARD;
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/reset-a-replication-zone.md b/src/current/_includes/v2.1/zone-configs/reset-a-replication-zone.md
deleted file mode 100644
index 60474c84a5d..00000000000
--- a/src/current/_includes/v2.1/zone-configs/reset-a-replication-zone.md
+++ /dev/null
@@ -1,8 +0,0 @@
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE t CONFIGURE ZONE USING DEFAULT;
-~~~
-
-~~~
-CONFIGURE ZONE 1
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/variables.md b/src/current/_includes/v2.1/zone-configs/variables.md
deleted file mode 100644
index 8a4751e570d..00000000000
--- a/src/current/_includes/v2.1/zone-configs/variables.md
+++ /dev/null
@@ -1,8 +0,0 @@
-Variable | Description
-------|------------
-`range_min_bytes` | The minimum size, in bytes, for a range of data in the zone. When a range is less than this size, CockroachDB will merge it with an adjacent range.<br><br>**Default:** `1048576` (1MiB)
-`range_max_bytes` | The maximum size, in bytes, for a range of data in the zone. When a range reaches this size, CockroachDB will split it into two ranges.<br><br>**Default:** `67108864` (64MiB)
-`gc.ttlseconds` | The number of seconds overwritten values will be retained before garbage collection. Smaller values can save disk space if values are frequently overwritten; larger values increase the range allowed for `AS OF SYSTEM TIME` queries, also known as [Time Travel Queries](select-clause.html#select-historical-data-time-travel).<br><br>It is not recommended to set this below `600` (10 minutes); doing so will cause problems for long-running queries. Also, since all versions of a row are stored in a single range that never splits, it is not recommended to set this so high that all the changes to a row in that time period could add up to more than 64MiB; such oversized ranges could cause the server to run out of memory, among other problems.<br><br>**Default:** `90000` (25 hours)
-`num_replicas` | The number of replicas in the zone.<br><br>**Default:** `3`<br><br>For the `system` database and `.meta`, `.liveness`, and `.system` ranges, the default value is `5`.
-`constraints` | An array of required (`+`) and/or prohibited (`-`) constraints influencing the location of replicas. See [Types of Constraints](configure-replication-zones.html#types-of-constraints) and [Scope of Constraints](configure-replication-zones.html#scope-of-constraints) for more details.<br><br>To prevent hard-to-detect typos, constraints placed on [store attributes and node localities](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) must match the values passed to at least one node in the cluster. If not, an error is signalled.<br><br>**Default:** No constraints, with CockroachDB locating each replica on a unique node and attempting to spread replicas evenly across localities.
-`lease_preferences` | An ordered list of required and/or prohibited constraints influencing the location of [leaseholders](architecture/overview.html#glossary). Whether each constraint is required or prohibited is expressed with a leading `+` or `-`, respectively. Note that lease preference constraints do not have to be shared with the `constraints` field. For example, it's valid for your configuration to define a `lease_preferences` field that does not reference any values from the `constraints` field. It's also valid to define a `lease_preferences` field with no `constraints` field at all.<br><br>If the first preference cannot be satisfied, CockroachDB will attempt to satisfy the second preference, and so on. If none of the preferences can be met, the lease will be placed using the default lease placement algorithm, which bases lease placement decisions on how many leases each node already has, trying to keep all nodes at around the same number.<br><br>Each value in the list can include multiple constraints. For example, the list `[[+zone=us-east-1b, +ssd], [+zone=us-east-1a], [+zone=us-east-1c, +ssd]]` means "prefer nodes with an SSD in `us-east-1b`, then any nodes in `us-east-1a`, then nodes in `us-east-1c` with an SSD."<br><br>For a usage example, see [Constrain leaseholders to specific datacenters](configure-replication-zones.html#constrain-leaseholders-to-specific-datacenters).<br><br>**Default**: No lease location preferences are applied if this field is not specified.
diff --git a/src/current/_includes/v2.1/zone-configs/view-all-replication-zones.md b/src/current/_includes/v2.1/zone-configs/view-all-replication-zones.md
deleted file mode 100644
index 076286064a1..00000000000
--- a/src/current/_includes/v2.1/zone-configs/view-all-replication-zones.md
+++ /dev/null
@@ -1,52 +0,0 @@
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ALL ZONE CONFIGURATIONS;
-~~~
-
-~~~
- zone_name | config_sql
-+-------------+-----------------------------------------------------+
- .default | ALTER RANGE default CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 3,
- | constraints = '[]',
- | lease_preferences = '[]'
- system | ALTER DATABASE system CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
- system.jobs | ALTER TABLE system.public.jobs CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 600,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
- .meta | ALTER RANGE meta CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 3600,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
- .system | ALTER RANGE system CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
- .liveness | ALTER RANGE liveness CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 600,
- | num_replicas = 5,
- | constraints = '[]',
- | lease_preferences = '[]'
-(6 rows)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/view-the-default-replication-zone.md b/src/current/_includes/v2.1/zone-configs/view-the-default-replication-zone.md
deleted file mode 100644
index 05120116574..00000000000
--- a/src/current/_includes/v2.1/zone-configs/view-the-default-replication-zone.md
+++ /dev/null
@@ -1,17 +0,0 @@
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR RANGE default;
-~~~
-
-~~~
- zone_name | config_sql
-+-----------+------------------------------------------+
- .default | ALTER RANGE default CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 3,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-database.md b/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-database.md
deleted file mode 100644
index 2d65a6aebdd..00000000000
--- a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-database.md
+++ /dev/null
@@ -1,16 +0,0 @@
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR DATABASE tpch;
-~~~
-~~~
- zone_name | config_sql
-+-----------+------------------------------------------+
- tpch | ALTER DATABASE tpch CONFIGURE ZONE USING
- | range_min_bytes = 1048576,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 3,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-partition.md b/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-partition.md
deleted file mode 100644
index 9fc68033116..00000000000
--- a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-partition.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR PARTITION north_america OF TABLE roachlearn.students;
-~~~
-
-~~~
- zone_name | config_sql
-+-----------------------------------+------------------------------------------------------------------------------------------------+
- roachlearn.students.north_america | ALTER PARTITION north_america OF INDEX roachlearn.public.students@primary CONFIGURE ZONE USING
- | range_min_bytes = 16777216,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 3,
- | constraints = '[+region=us]',
- | lease_preferences = '[]'
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-table.md b/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-table.md
deleted file mode 100644
index 5de95591be7..00000000000
--- a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-a-table.md
+++ /dev/null
@@ -1,16 +0,0 @@
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR TABLE tpch.customer;
-~~~
-~~~
- zone_name | config_sql
-+---------------+-------------------------------------------------------+
- tpch.customer | ALTER TABLE tpch.public.customer CONFIGURE ZONE USING
- | range_min_bytes = 40000,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 3,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-an-index.md b/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-an-index.md
deleted file mode 100644
index e50f56ed779..00000000000
--- a/src/current/_includes/v2.1/zone-configs/view-the-replication-zone-for-an-index.md
+++ /dev/null
@@ -1,16 +0,0 @@
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW ZONE CONFIGURATION FOR INDEX tpch.customer@frequent_customers;
-~~~
-~~~
- zone_name | config_sql
-+---------------+-------------------------------------------------------+
- tpch.customer | ALTER TABLE tpch.public.customer CONFIGURE ZONE USING
- | range_min_bytes = 40000,
- | range_max_bytes = 67108864,
- | gc.ttlseconds = 90000,
- | num_replicas = 3,
- | constraints = '[]',
- | lease_preferences = '[]'
-(1 row)
-~~~
diff --git a/src/current/_includes/v20.2/sql/shell-help.md b/src/current/_includes/v20.2/sql/shell-help.md
index 24bbc0be362..2380974ee2b 100644
--- a/src/current/_includes/v20.2/sql/shell-help.md
+++ b/src/current/_includes/v20.2/sql/shell-help.md
@@ -25,7 +25,7 @@ See also:
INSERT
UPSERT
DELETE
- https://www.cockroachlabs.com/docs/v2.1/update.html
+ https://www.cockroachlabs.com/docs/stable/update.html
~~~
~~~ sql
@@ -41,5 +41,5 @@ Signature Category
uuid_v4() -> bytes [ID Generation]
See also:
- https://www.cockroachlabs.com/docs/v2.1/functions-and-operators.html
+ https://www.cockroachlabs.com/docs/stable/functions-and-operators.html
~~~
diff --git a/src/current/_includes/v21.1/sql/shell-help.md b/src/current/_includes/v21.1/sql/shell-help.md
index 627ad837132..cbfd4037587 100644
--- a/src/current/_includes/v21.1/sql/shell-help.md
+++ b/src/current/_includes/v21.1/sql/shell-help.md
@@ -25,7 +25,7 @@ See also:
INSERT
UPSERT
DELETE
- https://www.cockroachlabs.com/docs/v2.1/update.html
+ https://www.cockroachlabs.com/docs/stable/update.html
~~~
~~~ sql
@@ -41,5 +41,5 @@ Signature Category
uuid_v4() -> bytes [ID Generation]
See also:
- https://www.cockroachlabs.com/docs/v2.1/functions-and-operators.html
+ https://www.cockroachlabs.com/docs/stable/functions-and-operators.html
~~~
diff --git a/src/current/_includes/v21.2/sql/shell-help.md b/src/current/_includes/v21.2/sql/shell-help.md
index 627ad837132..cbfd4037587 100644
--- a/src/current/_includes/v21.2/sql/shell-help.md
+++ b/src/current/_includes/v21.2/sql/shell-help.md
@@ -25,7 +25,7 @@ See also:
INSERT
UPSERT
DELETE
- https://www.cockroachlabs.com/docs/v2.1/update.html
+ https://www.cockroachlabs.com/docs/stable/update.html
~~~
~~~ sql
@@ -41,5 +41,5 @@ Signature Category
uuid_v4() -> bytes [ID Generation]
See also:
- https://www.cockroachlabs.com/docs/v2.1/functions-and-operators.html
+ https://www.cockroachlabs.com/docs/stable/functions-and-operators.html
~~~
diff --git a/src/current/_includes/v22.1/sql/shell-help.md b/src/current/_includes/v22.1/sql/shell-help.md
index 627ad837132..cbfd4037587 100644
--- a/src/current/_includes/v22.1/sql/shell-help.md
+++ b/src/current/_includes/v22.1/sql/shell-help.md
@@ -25,7 +25,7 @@ See also:
INSERT
UPSERT
DELETE
- https://www.cockroachlabs.com/docs/v2.1/update.html
+ https://www.cockroachlabs.com/docs/stable/update.html
~~~
~~~ sql
@@ -41,5 +41,5 @@ Signature Category
uuid_v4() -> bytes [ID Generation]
See also:
- https://www.cockroachlabs.com/docs/v2.1/functions-and-operators.html
+ https://www.cockroachlabs.com/docs/stable/functions-and-operators.html
~~~
diff --git a/src/current/_plugins/sidebar_htmltest.rb b/src/current/_plugins/sidebar_htmltest.rb
index 970926334d0..9d4fb322411 100644
--- a/src/current/_plugins/sidebar_htmltest.rb
+++ b/src/current/_plugins/sidebar_htmltest.rb
@@ -1,16 +1,36 @@
require 'json'
require 'liquid'
+require 'yaml'
module SidebarHTMLTest
class Generator < Jekyll::Generator
def generate(site)
@site = site
-
+
+ # Read htmltest configuration to get ignored directories
+ htmltest_config = YAML.load_file('.htmltest.yml') rescue {}
+ ignored_dirs = htmltest_config['IgnoreDirs'] || []
+
+ # Extract version numbers from ignored directories
+ ignored_versions = ignored_dirs.map do |dir|
+ match = dir.match(/\^?docs\/?(v\d+\.\d+)/)
+ match[1] if match
+ end.compact
+
Dir[File.join(site.config['includes_dir'], 'sidebar-data-v*.json')].each do |f|
next unless !!site.config['cockroachcloud'] == f.include?('cockroachcloud')
+
+ # Extract version from filename
+ version = f.match(/sidebar-data-(v\d+\.\d+)/)[1]
+
+ # Skip if this version is in the ignored list
+ if ignored_versions.include?(version)
+ Jekyll.logger.info "SidebarHTMLTest:", "Skipping ignored version #{version}"
+ next
+ end
+
partial = site.liquid_renderer.file(f).parse(File.read(f))
json = partial.render!(site.site_payload, {registers: {site: site}})
- version = f.match(/sidebar-data-(v\d+\.\d+)/)[1]
render_sidebar(json, version)
end
end
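
To make the new skip logic easier to verify in isolation, here is a minimal standalone sketch (not part of the patch) that runs the same `IgnoreDirs` parsing against a hypothetical `.htmltest.yml` excerpt; the entry values below are illustrative assumptions, not taken from the repository:

~~~ ruby
require 'yaml'

# Hypothetical .htmltest.yml excerpt; real entries may differ.
config = YAML.safe_load(<<~YML)
  IgnoreDirs:
    - "^docs/v2.1/"
    - "docs/v20.2"
    - "^images/"
YML

# Same extraction as the plugin: pull "vX.Y" out of each ignored
# docs directory, dropping entries that do not match.
ignored_versions = (config['IgnoreDirs'] || []).map do |dir|
  match = dir.match(/\^?docs\/?(v\d+\.\d+)/)
  match[1] if match
end.compact

puts ignored_versions.inspect  # => ["v2.1", "v20.2"]
~~~

The optional `\^?` and `\/?` in the pattern let both anchored entries like `^docs/v2.1/` and bare ones like `docs/v20.2` match, while unrelated entries such as `^images/` yield `nil` and are removed by `compact`.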
diff --git a/src/current/images/v2.1/dbeaver-01-select-cockroachdb.png b/src/current/images/common/dbeaver/dbeaver-01-select-cockroachdb.png
similarity index 100%
rename from src/current/images/v2.1/dbeaver-01-select-cockroachdb.png
rename to src/current/images/common/dbeaver/dbeaver-01-select-cockroachdb.png
diff --git a/src/current/images/v2.1/dbeaver-02-cockroachdb-connection-settings.png b/src/current/images/common/dbeaver/dbeaver-02-cockroachdb-connection-settings.png
similarity index 100%
rename from src/current/images/v2.1/dbeaver-02-cockroachdb-connection-settings.png
rename to src/current/images/common/dbeaver/dbeaver-02-cockroachdb-connection-settings.png
diff --git a/src/current/images/v2.1/dbeaver-03-ssl-tab.png b/src/current/images/common/dbeaver/dbeaver-03-ssl-tab.png
similarity index 100%
rename from src/current/images/v2.1/dbeaver-03-ssl-tab.png
rename to src/current/images/common/dbeaver/dbeaver-03-ssl-tab.png
diff --git a/src/current/images/v2.1/dbeaver-04-connection-success-dialog.png b/src/current/images/common/dbeaver/dbeaver-04-connection-success-dialog.png
similarity index 100%
rename from src/current/images/v2.1/dbeaver-04-connection-success-dialog.png
rename to src/current/images/common/dbeaver/dbeaver-04-connection-success-dialog.png
diff --git a/src/current/images/v2.1/dbeaver-05-movr.png b/src/current/images/common/dbeaver/dbeaver-05-movr.png
similarity index 100%
rename from src/current/images/v2.1/dbeaver-05-movr.png
rename to src/current/images/common/dbeaver/dbeaver-05-movr.png
diff --git a/src/current/images/v2.1/intellij/01_database_tool_window.png b/src/current/images/common/intellij/01_database_tool_window.png
similarity index 100%
rename from src/current/images/v2.1/intellij/01_database_tool_window.png
rename to src/current/images/common/intellij/01_database_tool_window.png
diff --git a/src/current/images/v2.1/intellij/02_postgresql_data_source.png b/src/current/images/common/intellij/02_postgresql_data_source.png
similarity index 100%
rename from src/current/images/v2.1/intellij/02_postgresql_data_source.png
rename to src/current/images/common/intellij/02_postgresql_data_source.png
diff --git a/src/current/images/v2.1/intellij/03_general_tab.png b/src/current/images/common/intellij/03_general_tab.png
similarity index 100%
rename from src/current/images/v2.1/intellij/03_general_tab.png
rename to src/current/images/common/intellij/03_general_tab.png
diff --git a/src/current/images/v2.1/intellij/04_options_tab.png b/src/current/images/common/intellij/04_options_tab.png
similarity index 100%
rename from src/current/images/v2.1/intellij/04_options_tab.png
rename to src/current/images/common/intellij/04_options_tab.png
diff --git a/src/current/images/v2.1/intellij/42073_error_column_n_xmin_does_not_exist.png b/src/current/images/common/intellij/42073_error_column_n_xmin_does_not_exist.png
similarity index 100%
rename from src/current/images/v2.1/intellij/42073_error_column_n_xmin_does_not_exist.png
rename to src/current/images/common/intellij/42073_error_column_n_xmin_does_not_exist.png
diff --git a/src/current/images/v2.1/intellij/42883_error_pg_function_is_visible.png b/src/current/images/common/intellij/42883_error_pg_function_is_visible.png
similarity index 100%
rename from src/current/images/v2.1/intellij/42883_error_pg_function_is_visible.png
rename to src/current/images/common/intellij/42883_error_pg_function_is_visible.png
diff --git a/src/current/images/v2.1/intellij/XX000_error_could_not_decorrelate_subquery.png b/src/current/images/common/intellij/XX000_error_could_not_decorrelate_subquery.png
similarity index 100%
rename from src/current/images/v2.1/intellij/XX000_error_could_not_decorrelate_subquery.png
rename to src/current/images/common/intellij/XX000_error_could_not_decorrelate_subquery.png
diff --git a/src/current/images/v2.1/intellij/error_could_not_decorrelate_subquery.png b/src/current/images/common/intellij/error_could_not_decorrelate_subquery.png
similarity index 100%
rename from src/current/images/v2.1/intellij/error_could_not_decorrelate_subquery.png
rename to src/current/images/common/intellij/error_could_not_decorrelate_subquery.png
diff --git a/src/current/images/v2.1/kubernetes-alertmanager-home.png b/src/current/images/common/kubernetes/kubernetes-alertmanager-home.png
similarity index 100%
rename from src/current/images/v2.1/kubernetes-alertmanager-home.png
rename to src/current/images/common/kubernetes/kubernetes-alertmanager-home.png
diff --git a/src/current/images/v2.1/kubernetes-prometheus-alertmanagers.png b/src/current/images/common/kubernetes/kubernetes-prometheus-alertmanagers.png
similarity index 100%
rename from src/current/images/v2.1/kubernetes-prometheus-alertmanagers.png
rename to src/current/images/common/kubernetes/kubernetes-prometheus-alertmanagers.png
diff --git a/src/current/images/v2.1/kubernetes-prometheus-alertrules.png b/src/current/images/common/kubernetes/kubernetes-prometheus-alertrules.png
similarity index 100%
rename from src/current/images/v2.1/kubernetes-prometheus-alertrules.png
rename to src/current/images/common/kubernetes/kubernetes-prometheus-alertrules.png
diff --git a/src/current/images/v2.1/kubernetes-prometheus-alerts.png b/src/current/images/common/kubernetes/kubernetes-prometheus-alerts.png
similarity index 100%
rename from src/current/images/v2.1/kubernetes-prometheus-alerts.png
rename to src/current/images/common/kubernetes/kubernetes-prometheus-alerts.png
diff --git a/src/current/images/v2.1/kubernetes-prometheus-graph.png b/src/current/images/common/kubernetes/kubernetes-prometheus-graph.png
similarity index 100%
rename from src/current/images/v2.1/kubernetes-prometheus-graph.png
rename to src/current/images/common/kubernetes/kubernetes-prometheus-graph.png
diff --git a/src/current/images/v2.1/kubernetes-prometheus-targets.png b/src/current/images/common/kubernetes/kubernetes-prometheus-targets.png
similarity index 100%
rename from src/current/images/v2.1/kubernetes-prometheus-targets.png
rename to src/current/images/common/kubernetes/kubernetes-prometheus-targets.png
diff --git a/src/current/images/common/kubernetes/kubernetes-upgrade.png b/src/current/images/common/kubernetes/kubernetes-upgrade.png
new file mode 100644
index 00000000000..497559cef73
Binary files /dev/null and b/src/current/images/common/kubernetes/kubernetes-upgrade.png differ
diff --git a/src/current/images/v2.1/2automated-scaling-repair.png b/src/current/images/v2.1/2automated-scaling-repair.png
deleted file mode 100644
index 2402db24d75..00000000000
Binary files a/src/current/images/v2.1/2automated-scaling-repair.png and /dev/null differ
diff --git a/src/current/images/v2.1/2distributed-transactions.png b/src/current/images/v2.1/2distributed-transactions.png
deleted file mode 100644
index 52fc2d11943..00000000000
Binary files a/src/current/images/v2.1/2distributed-transactions.png and /dev/null differ
diff --git a/src/current/images/v2.1/2go-implementation.png b/src/current/images/v2.1/2go-implementation.png
deleted file mode 100644
index e5729f51cfb..00000000000
Binary files a/src/current/images/v2.1/2go-implementation.png and /dev/null differ
diff --git a/src/current/images/v2.1/2open-source.png b/src/current/images/v2.1/2open-source.png
deleted file mode 100644
index b2a936d8d29..00000000000
Binary files a/src/current/images/v2.1/2open-source.png and /dev/null differ
diff --git a/src/current/images/v2.1/2simplified-deployments.png b/src/current/images/v2.1/2simplified-deployments.png
deleted file mode 100644
index 15576d1ae5d..00000000000
Binary files a/src/current/images/v2.1/2simplified-deployments.png and /dev/null differ
diff --git a/src/current/images/v2.1/2strong-consistency.png b/src/current/images/v2.1/2strong-consistency.png
deleted file mode 100644
index 571dc01761d..00000000000
Binary files a/src/current/images/v2.1/2strong-consistency.png and /dev/null differ
diff --git a/src/current/images/v2.1/CockroachDB_Training_Wide.png b/src/current/images/v2.1/CockroachDB_Training_Wide.png
deleted file mode 100644
index 0844c2b50e0..00000000000
Binary files a/src/current/images/v2.1/CockroachDB_Training_Wide.png and /dev/null differ
diff --git a/src/current/images/v2.1/Parallel_Statement_Execution_Error_Mismatch.png b/src/current/images/v2.1/Parallel_Statement_Execution_Error_Mismatch.png
deleted file mode 100644
index f60360c9598..00000000000
Binary files a/src/current/images/v2.1/Parallel_Statement_Execution_Error_Mismatch.png and /dev/null differ
diff --git a/src/current/images/v2.1/Parallel_Statement_Hybrid_Execution.png b/src/current/images/v2.1/Parallel_Statement_Hybrid_Execution.png
deleted file mode 100644
index a4edf85dc02..00000000000
Binary files a/src/current/images/v2.1/Parallel_Statement_Hybrid_Execution.png and /dev/null differ
diff --git a/src/current/images/v2.1/Parallel_Statement_Normal_Execution.png b/src/current/images/v2.1/Parallel_Statement_Normal_Execution.png
deleted file mode 100644
index df63ab1da01..00000000000
Binary files a/src/current/images/v2.1/Parallel_Statement_Normal_Execution.png and /dev/null differ
diff --git a/src/current/images/v2.1/Sequential_Statement_Execution.png b/src/current/images/v2.1/Sequential_Statement_Execution.png
deleted file mode 100644
index 99c47c51664..00000000000
Binary files a/src/current/images/v2.1/Sequential_Statement_Execution.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-cluster-overview-panel.png b/src/current/images/v2.1/admin-ui-cluster-overview-panel.png
deleted file mode 100644
index ee906077ee8..00000000000
Binary files a/src/current/images/v2.1/admin-ui-cluster-overview-panel.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-custom-chart-debug-00.png b/src/current/images/v2.1/admin-ui-custom-chart-debug-00.png
deleted file mode 100644
index a82305beffd..00000000000
Binary files a/src/current/images/v2.1/admin-ui-custom-chart-debug-00.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-custom-chart-debug-01.png b/src/current/images/v2.1/admin-ui-custom-chart-debug-01.png
deleted file mode 100644
index f8b9162f14e..00000000000
Binary files a/src/current/images/v2.1/admin-ui-custom-chart-debug-01.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-node-components.png b/src/current/images/v2.1/admin-ui-node-components.png
deleted file mode 100644
index 2ed730ff80c..00000000000
Binary files a/src/current/images/v2.1/admin-ui-node-components.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-node-list.png b/src/current/images/v2.1/admin-ui-node-list.png
deleted file mode 100644
index 14b7b87d58d..00000000000
Binary files a/src/current/images/v2.1/admin-ui-node-list.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-node-map-after-license.png b/src/current/images/v2.1/admin-ui-node-map-after-license.png
deleted file mode 100644
index fa47a7b579f..00000000000
Binary files a/src/current/images/v2.1/admin-ui-node-map-after-license.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-node-map-before-license.png b/src/current/images/v2.1/admin-ui-node-map-before-license.png
deleted file mode 100644
index f352e214868..00000000000
Binary files a/src/current/images/v2.1/admin-ui-node-map-before-license.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-node-map-complete.png b/src/current/images/v2.1/admin-ui-node-map-complete.png
deleted file mode 100644
index 46b1c38d4bf..00000000000
Binary files a/src/current/images/v2.1/admin-ui-node-map-complete.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-node-map-navigation.gif b/src/current/images/v2.1/admin-ui-node-map-navigation.gif
deleted file mode 100644
index 67ce2dc009c..00000000000
Binary files a/src/current/images/v2.1/admin-ui-node-map-navigation.gif and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-node-map.png b/src/current/images/v2.1/admin-ui-node-map.png
deleted file mode 100644
index c1e0b83a3dc..00000000000
Binary files a/src/current/images/v2.1/admin-ui-node-map.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-region-component.png b/src/current/images/v2.1/admin-ui-region-component.png
deleted file mode 100644
index c36a362d107..00000000000
Binary files a/src/current/images/v2.1/admin-ui-region-component.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-single-node.gif b/src/current/images/v2.1/admin-ui-single-node.gif
deleted file mode 100644
index f60d25b0e2a..00000000000
Binary files a/src/current/images/v2.1/admin-ui-single-node.gif and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-statements-page.png b/src/current/images/v2.1/admin-ui-statements-page.png
deleted file mode 100644
index aec34103196..00000000000
Binary files a/src/current/images/v2.1/admin-ui-statements-page.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin-ui-time-range.gif b/src/current/images/v2.1/admin-ui-time-range.gif
deleted file mode 100644
index c28807b9a1b..00000000000
Binary files a/src/current/images/v2.1/admin-ui-time-range.gif and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_available_disk_capacity.png b/src/current/images/v2.1/admin_ui_available_disk_capacity.png
deleted file mode 100644
index 7ee4c2c5359..00000000000
Binary files a/src/current/images/v2.1/admin_ui_available_disk_capacity.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_capacity.png b/src/current/images/v2.1/admin_ui_capacity.png
deleted file mode 100644
index 1e9085851af..00000000000
Binary files a/src/current/images/v2.1/admin_ui_capacity.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_cpu_percent.png b/src/current/images/v2.1/admin_ui_cpu_percent.png
deleted file mode 100644
index dae468b6d6f..00000000000
Binary files a/src/current/images/v2.1/admin_ui_cpu_percent.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_cpu_time.png b/src/current/images/v2.1/admin_ui_cpu_time.png
deleted file mode 100644
index 3e81817ca38..00000000000
Binary files a/src/current/images/v2.1/admin_ui_cpu_time.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_database_grants_view.png b/src/current/images/v2.1/admin_ui_database_grants_view.png
deleted file mode 100644
index c21145da9f9..00000000000
Binary files a/src/current/images/v2.1/admin_ui_database_grants_view.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_database_tables_view.png b/src/current/images/v2.1/admin_ui_database_tables_view.png
deleted file mode 100644
index 27ecf789e8a..00000000000
Binary files a/src/current/images/v2.1/admin_ui_database_tables_view.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_disk_iops.png b/src/current/images/v2.1/admin_ui_disk_iops.png
deleted file mode 100644
index f0f553547e3..00000000000
Binary files a/src/current/images/v2.1/admin_ui_disk_iops.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_disk_read_bytes.png b/src/current/images/v2.1/admin_ui_disk_read_bytes.png
deleted file mode 100644
index 15bcb584f55..00000000000
Binary files a/src/current/images/v2.1/admin_ui_disk_read_bytes.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_disk_read_ops.png b/src/current/images/v2.1/admin_ui_disk_read_ops.png
deleted file mode 100644
index 55b356f84ec..00000000000
Binary files a/src/current/images/v2.1/admin_ui_disk_read_ops.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_disk_read_time.png b/src/current/images/v2.1/admin_ui_disk_read_time.png
deleted file mode 100644
index fd340744135..00000000000
Binary files a/src/current/images/v2.1/admin_ui_disk_read_time.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_disk_write_bytes.png b/src/current/images/v2.1/admin_ui_disk_write_bytes.png
deleted file mode 100644
index e3fd5fccdad..00000000000
Binary files a/src/current/images/v2.1/admin_ui_disk_write_bytes.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_disk_write_ops.png b/src/current/images/v2.1/admin_ui_disk_write_ops.png
deleted file mode 100644
index 9e493d69f88..00000000000
Binary files a/src/current/images/v2.1/admin_ui_disk_write_ops.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_disk_write_time.png b/src/current/images/v2.1/admin_ui_disk_write_time.png
deleted file mode 100644
index 3cd023ffd40..00000000000
Binary files a/src/current/images/v2.1/admin_ui_disk_write_time.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_events.png b/src/current/images/v2.1/admin_ui_events.png
deleted file mode 100644
index 3d3a4738c78..00000000000
Binary files a/src/current/images/v2.1/admin_ui_events.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_file_descriptors.png b/src/current/images/v2.1/admin_ui_file_descriptors.png
deleted file mode 100644
index 42187c9878d..00000000000
Binary files a/src/current/images/v2.1/admin_ui_file_descriptors.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_hovering.gif b/src/current/images/v2.1/admin_ui_hovering.gif
deleted file mode 100644
index 1795471051f..00000000000
Binary files a/src/current/images/v2.1/admin_ui_hovering.gif and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_jobs_page.png b/src/current/images/v2.1/admin_ui_jobs_page.png
deleted file mode 100644
index a9f07a785a3..00000000000
Binary files a/src/current/images/v2.1/admin_ui_jobs_page.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_jobs_page_new.png b/src/current/images/v2.1/admin_ui_jobs_page_new.png
deleted file mode 100644
index dd07672cde0..00000000000
Binary files a/src/current/images/v2.1/admin_ui_jobs_page_new.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_memory_usage.png b/src/current/images/v2.1/admin_ui_memory_usage.png
deleted file mode 100644
index ffc2c515616..00000000000
Binary files a/src/current/images/v2.1/admin_ui_memory_usage.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_memory_usage_new.png b/src/current/images/v2.1/admin_ui_memory_usage_new.png
deleted file mode 100644
index 97ae93e1b8e..00000000000
Binary files a/src/current/images/v2.1/admin_ui_memory_usage_new.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_network_bytes_received.png b/src/current/images/v2.1/admin_ui_network_bytes_received.png
deleted file mode 100644
index e9a274dc793..00000000000
Binary files a/src/current/images/v2.1/admin_ui_network_bytes_received.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_network_bytes_sent.png b/src/current/images/v2.1/admin_ui_network_bytes_sent.png
deleted file mode 100644
index 2eb35a43222..00000000000
Binary files a/src/current/images/v2.1/admin_ui_network_bytes_sent.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_node_count.png b/src/current/images/v2.1/admin_ui_node_count.png
deleted file mode 100644
index d5c103fc868..00000000000
Binary files a/src/current/images/v2.1/admin_ui_node_count.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_nodes_page.png b/src/current/images/v2.1/admin_ui_nodes_page.png
deleted file mode 100644
index 495ff14eea0..00000000000
Binary files a/src/current/images/v2.1/admin_ui_nodes_page.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_overview_dashboard.png b/src/current/images/v2.1/admin_ui_overview_dashboard.png
deleted file mode 100644
index c2adcbf0c83..00000000000
Binary files a/src/current/images/v2.1/admin_ui_overview_dashboard.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_ranges.png b/src/current/images/v2.1/admin_ui_ranges.png
deleted file mode 100644
index 316186bb4a3..00000000000
Binary files a/src/current/images/v2.1/admin_ui_ranges.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_replica_quiescence.png b/src/current/images/v2.1/admin_ui_replica_quiescence.png
deleted file mode 100644
index 663dbfb097e..00000000000
Binary files a/src/current/images/v2.1/admin_ui_replica_quiescence.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_replica_snapshots.png b/src/current/images/v2.1/admin_ui_replica_snapshots.png
deleted file mode 100644
index 56146c7f775..00000000000
Binary files a/src/current/images/v2.1/admin_ui_replica_snapshots.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_replicas_migration.png b/src/current/images/v2.1/admin_ui_replicas_migration.png
deleted file mode 100644
index 6e08c5a3a5b..00000000000
Binary files a/src/current/images/v2.1/admin_ui_replicas_migration.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_replicas_migration2.png b/src/current/images/v2.1/admin_ui_replicas_migration2.png
deleted file mode 100644
index f7183689f20..00000000000
Binary files a/src/current/images/v2.1/admin_ui_replicas_migration2.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_replicas_migration3.png b/src/current/images/v2.1/admin_ui_replicas_migration3.png
deleted file mode 100644
index b7d9fd39760..00000000000
Binary files a/src/current/images/v2.1/admin_ui_replicas_migration3.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_replicas_per_node.png b/src/current/images/v2.1/admin_ui_replicas_per_node.png
deleted file mode 100644
index a6a662c6f32..00000000000
Binary files a/src/current/images/v2.1/admin_ui_replicas_per_node.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_replicas_per_store.png b/src/current/images/v2.1/admin_ui_replicas_per_store.png
deleted file mode 100644
index 2036c392fc8..00000000000
Binary files a/src/current/images/v2.1/admin_ui_replicas_per_store.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_service_latency_99_percentile.png b/src/current/images/v2.1/admin_ui_service_latency_99_percentile.png
deleted file mode 100644
index 7e14805d21d..00000000000
Binary files a/src/current/images/v2.1/admin_ui_service_latency_99_percentile.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_sql_byte_traffic.png b/src/current/images/v2.1/admin_ui_sql_byte_traffic.png
deleted file mode 100644
index 9f077b25259..00000000000
Binary files a/src/current/images/v2.1/admin_ui_sql_byte_traffic.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_sql_connections.png b/src/current/images/v2.1/admin_ui_sql_connections.png
deleted file mode 100644
index 7cda5614e49..00000000000
Binary files a/src/current/images/v2.1/admin_ui_sql_connections.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_sql_queries.png b/src/current/images/v2.1/admin_ui_sql_queries.png
deleted file mode 100644
index 771c995256e..00000000000
Binary files a/src/current/images/v2.1/admin_ui_sql_queries.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_sql_query_errors.png b/src/current/images/v2.1/admin_ui_sql_query_errors.png
deleted file mode 100644
index 6dfe71291f3..00000000000
Binary files a/src/current/images/v2.1/admin_ui_sql_query_errors.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_statements_details_page.png b/src/current/images/v2.1/admin_ui_statements_details_page.png
deleted file mode 100644
index dfa16481bee..00000000000
Binary files a/src/current/images/v2.1/admin_ui_statements_details_page.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_summary_panel.png b/src/current/images/v2.1/admin_ui_summary_panel.png
deleted file mode 100644
index 5eaa9b18439..00000000000
Binary files a/src/current/images/v2.1/admin_ui_summary_panel.png and /dev/null differ
diff --git a/src/current/images/v2.1/admin_ui_transactions.png b/src/current/images/v2.1/admin_ui_transactions.png
deleted file mode 100644
index 5131ecc6b2d..00000000000
Binary files a/src/current/images/v2.1/admin_ui_transactions.png and /dev/null differ
diff --git a/src/current/images/v2.1/after-decommission1.png b/src/current/images/v2.1/after-decommission1.png
deleted file mode 100644
index 945ec05f974..00000000000
Binary files a/src/current/images/v2.1/after-decommission1.png and /dev/null differ
diff --git a/src/current/images/v2.1/after-decommission2.png b/src/current/images/v2.1/after-decommission2.png
deleted file mode 100644
index fbb041d2c14..00000000000
Binary files a/src/current/images/v2.1/after-decommission2.png and /dev/null differ
diff --git a/src/current/images/v2.1/automated-operations1.png b/src/current/images/v2.1/automated-operations1.png
deleted file mode 100644
index 64c6e51616c..00000000000
Binary files a/src/current/images/v2.1/automated-operations1.png and /dev/null differ
diff --git a/src/current/images/v2.1/before-decommission1.png b/src/current/images/v2.1/before-decommission1.png
deleted file mode 100644
index 91627545b22..00000000000
Binary files a/src/current/images/v2.1/before-decommission1.png and /dev/null differ
diff --git a/src/current/images/v2.1/before-decommission2.png b/src/current/images/v2.1/before-decommission2.png
deleted file mode 100644
index 063efeb6326..00000000000
Binary files a/src/current/images/v2.1/before-decommission2.png and /dev/null differ
diff --git a/src/current/images/v2.1/cloudformation_admin_ui_live_node_count.png b/src/current/images/v2.1/cloudformation_admin_ui_live_node_count.png
deleted file mode 100644
index fce52a39034..00000000000
Binary files a/src/current/images/v2.1/cloudformation_admin_ui_live_node_count.png and /dev/null differ
diff --git a/src/current/images/v2.1/cloudformation_admin_ui_replicas.png b/src/current/images/v2.1/cloudformation_admin_ui_replicas.png
deleted file mode 100644
index 9327b1004e4..00000000000
Binary files a/src/current/images/v2.1/cloudformation_admin_ui_replicas.png and /dev/null differ
diff --git a/src/current/images/v2.1/cloudformation_admin_ui_sql_queries.png b/src/current/images/v2.1/cloudformation_admin_ui_sql_queries.png
deleted file mode 100644
index 843d94b30f0..00000000000
Binary files a/src/current/images/v2.1/cloudformation_admin_ui_sql_queries.png and /dev/null differ
diff --git a/src/current/images/v2.1/cluster-status-after-decommission1.png b/src/current/images/v2.1/cluster-status-after-decommission1.png
deleted file mode 100644
index 35d96fef0d5..00000000000
Binary files a/src/current/images/v2.1/cluster-status-after-decommission1.png and /dev/null differ
diff --git a/src/current/images/v2.1/cluster-status-after-decommission2.png b/src/current/images/v2.1/cluster-status-after-decommission2.png
deleted file mode 100644
index e420e202aa6..00000000000
Binary files a/src/current/images/v2.1/cluster-status-after-decommission2.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-multiple1.png b/src/current/images/v2.1/decommission-multiple1.png
deleted file mode 100644
index 30c90280f7c..00000000000
Binary files a/src/current/images/v2.1/decommission-multiple1.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-multiple2.png b/src/current/images/v2.1/decommission-multiple2.png
deleted file mode 100644
index d93abcd4acb..00000000000
Binary files a/src/current/images/v2.1/decommission-multiple2.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-multiple3.png b/src/current/images/v2.1/decommission-multiple3.png
deleted file mode 100644
index 3a1d17176de..00000000000
Binary files a/src/current/images/v2.1/decommission-multiple3.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-multiple4.png b/src/current/images/v2.1/decommission-multiple4.png
deleted file mode 100644
index 854c4ba50c9..00000000000
Binary files a/src/current/images/v2.1/decommission-multiple4.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-multiple5.png b/src/current/images/v2.1/decommission-multiple5.png
deleted file mode 100644
index 3a8621e956b..00000000000
Binary files a/src/current/images/v2.1/decommission-multiple5.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-multiple6.png b/src/current/images/v2.1/decommission-multiple6.png
deleted file mode 100644
index 168ba907be1..00000000000
Binary files a/src/current/images/v2.1/decommission-multiple6.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-multiple7.png b/src/current/images/v2.1/decommission-multiple7.png
deleted file mode 100644
index a52d034cf9a..00000000000
Binary files a/src/current/images/v2.1/decommission-multiple7.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario1.1.png b/src/current/images/v2.1/decommission-scenario1.1.png
deleted file mode 100644
index a66389270de..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario1.1.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario1.2.png b/src/current/images/v2.1/decommission-scenario1.2.png
deleted file mode 100644
index 9b33855e101..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario1.2.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario1.3.png b/src/current/images/v2.1/decommission-scenario1.3.png
deleted file mode 100644
index 4c1175d956b..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario1.3.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario2.1.png b/src/current/images/v2.1/decommission-scenario2.1.png
deleted file mode 100644
index 2fa8790c556..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario2.1.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario2.2.png b/src/current/images/v2.1/decommission-scenario2.2.png
deleted file mode 100644
index 391b8e24c0f..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario2.2.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario3.1.png b/src/current/images/v2.1/decommission-scenario3.1.png
deleted file mode 100644
index db682df3d78..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario3.1.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario3.2.png b/src/current/images/v2.1/decommission-scenario3.2.png
deleted file mode 100644
index 3571bd0b83e..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario3.2.png and /dev/null differ
diff --git a/src/current/images/v2.1/decommission-scenario3.3.png b/src/current/images/v2.1/decommission-scenario3.3.png
deleted file mode 100644
index 45f61d9bd18..00000000000
Binary files a/src/current/images/v2.1/decommission-scenario3.3.png and /dev/null differ
diff --git a/src/current/images/v2.1/explain-analyze-distsql-plan.png b/src/current/images/v2.1/explain-analyze-distsql-plan.png
deleted file mode 100644
index d0f0371520c..00000000000
Binary files a/src/current/images/v2.1/explain-analyze-distsql-plan.png and /dev/null differ
diff --git a/src/current/images/v2.1/explain-distsql-plan.png b/src/current/images/v2.1/explain-distsql-plan.png
deleted file mode 100644
index 77a6699cca4..00000000000
Binary files a/src/current/images/v2.1/explain-distsql-plan.png and /dev/null differ
diff --git a/src/current/images/v2.1/follow-workload-1.png b/src/current/images/v2.1/follow-workload-1.png
deleted file mode 100644
index a58fcb2e5ed..00000000000
Binary files a/src/current/images/v2.1/follow-workload-1.png and /dev/null differ
diff --git a/src/current/images/v2.1/follow-workload-2.png b/src/current/images/v2.1/follow-workload-2.png
deleted file mode 100644
index 47d83c5d4d6..00000000000
Binary files a/src/current/images/v2.1/follow-workload-2.png and /dev/null differ
diff --git a/src/current/images/v2.1/icon_info.svg b/src/current/images/v2.1/icon_info.svg
deleted file mode 100644
index 57aac994733..00000000000
--- a/src/current/images/v2.1/icon_info.svg
+++ /dev/null
@@ -1,4 +0,0 @@
-
\ No newline at end of file
diff --git a/src/current/images/v2.1/perf_tuning_concepts1.png b/src/current/images/v2.1/perf_tuning_concepts1.png
deleted file mode 100644
index 3a086a41c26..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_concepts1.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_concepts2.png b/src/current/images/v2.1/perf_tuning_concepts2.png
deleted file mode 100644
index d67b8f253f8..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_concepts2.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_concepts3.png b/src/current/images/v2.1/perf_tuning_concepts3.png
deleted file mode 100644
index 46d666be55d..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_concepts3.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_concepts4.png b/src/current/images/v2.1/perf_tuning_concepts4.png
deleted file mode 100644
index b60b19e01bf..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_concepts4.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_movr_schema.png b/src/current/images/v2.1/perf_tuning_movr_schema.png
deleted file mode 100644
index 262adc18b75..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_movr_schema.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_multi_region_rebalancing.png b/src/current/images/v2.1/perf_tuning_multi_region_rebalancing.png
deleted file mode 100644
index 7064e3962db..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_multi_region_rebalancing.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_multi_region_rebalancing_after_partitioning.png b/src/current/images/v2.1/perf_tuning_multi_region_rebalancing_after_partitioning.png
deleted file mode 100644
index 433c0f8ba03..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_multi_region_rebalancing_after_partitioning.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_multi_region_topology.png b/src/current/images/v2.1/perf_tuning_multi_region_topology.png
deleted file mode 100644
index fe64c322ca0..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_multi_region_topology.png and /dev/null differ
diff --git a/src/current/images/v2.1/perf_tuning_single_region_topology.png b/src/current/images/v2.1/perf_tuning_single_region_topology.png
deleted file mode 100644
index 4dfca364929..00000000000
Binary files a/src/current/images/v2.1/perf_tuning_single_region_topology.png and /dev/null differ
diff --git a/src/current/images/v2.1/raw-status-endpoints.png b/src/current/images/v2.1/raw-status-endpoints.png
deleted file mode 100644
index a893911fa87..00000000000
Binary files a/src/current/images/v2.1/raw-status-endpoints.png and /dev/null differ
diff --git a/src/current/images/v2.1/recovery1.png b/src/current/images/v2.1/recovery1.png
deleted file mode 100644
index 8a14f7e965a..00000000000
Binary files a/src/current/images/v2.1/recovery1.png and /dev/null differ
diff --git a/src/current/images/v2.1/recovery2.png b/src/current/images/v2.1/recovery2.png
deleted file mode 100644
index 7ec3fed2adc..00000000000
Binary files a/src/current/images/v2.1/recovery2.png and /dev/null differ
diff --git a/src/current/images/v2.1/recovery3.png b/src/current/images/v2.1/recovery3.png
deleted file mode 100644
index a82da79f64a..00000000000
Binary files a/src/current/images/v2.1/recovery3.png and /dev/null differ
diff --git a/src/current/images/v2.1/remove-dead-node1.png b/src/current/images/v2.1/remove-dead-node1.png
deleted file mode 100644
index 26569078efd..00000000000
Binary files a/src/current/images/v2.1/remove-dead-node1.png and /dev/null differ
diff --git a/src/current/images/v2.1/replication1.png b/src/current/images/v2.1/replication1.png
deleted file mode 100644
index 303fa425835..00000000000
Binary files a/src/current/images/v2.1/replication1.png and /dev/null differ
diff --git a/src/current/images/v2.1/replication2.png b/src/current/images/v2.1/replication2.png
deleted file mode 100644
index b218066c81d..00000000000
Binary files a/src/current/images/v2.1/replication2.png and /dev/null differ
diff --git a/src/current/images/v2.1/scalability1.png b/src/current/images/v2.1/scalability1.png
deleted file mode 100644
index a60944d05d3..00000000000
Binary files a/src/current/images/v2.1/scalability1.png and /dev/null differ
diff --git a/src/current/images/v2.1/scalability2.png b/src/current/images/v2.1/scalability2.png
deleted file mode 100644
index c8dfd8c0574..00000000000
Binary files a/src/current/images/v2.1/scalability2.png and /dev/null differ
diff --git a/src/current/images/v2.1/serializable_schema.png b/src/current/images/v2.1/serializable_schema.png
deleted file mode 100644
index 7e8b4e324c6..00000000000
Binary files a/src/current/images/v2.1/serializable_schema.png and /dev/null differ
diff --git a/src/current/images/v2.1/trace.png b/src/current/images/v2.1/trace.png
deleted file mode 100644
index 4f0fb98a753..00000000000
Binary files a/src/current/images/v2.1/trace.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-1.1.png b/src/current/images/v2.1/training-1.1.png
deleted file mode 100644
index d1adf35bcde..00000000000
Binary files a/src/current/images/v2.1/training-1.1.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-1.2.png b/src/current/images/v2.1/training-1.2.png
deleted file mode 100644
index 1993355b08e..00000000000
Binary files a/src/current/images/v2.1/training-1.2.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-1.png b/src/current/images/v2.1/training-1.png
deleted file mode 100644
index 9f8de513337..00000000000
Binary files a/src/current/images/v2.1/training-1.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-10.png b/src/current/images/v2.1/training-10.png
deleted file mode 100644
index b319a5bf490..00000000000
Binary files a/src/current/images/v2.1/training-10.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-11.png b/src/current/images/v2.1/training-11.png
deleted file mode 100644
index b1016d0ce37..00000000000
Binary files a/src/current/images/v2.1/training-11.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-12.png b/src/current/images/v2.1/training-12.png
deleted file mode 100644
index 7a8e4cd8e05..00000000000
Binary files a/src/current/images/v2.1/training-12.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-13.png b/src/current/images/v2.1/training-13.png
deleted file mode 100644
index fc870143136..00000000000
Binary files a/src/current/images/v2.1/training-13.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-14.png b/src/current/images/v2.1/training-14.png
deleted file mode 100644
index fe517518ed7..00000000000
Binary files a/src/current/images/v2.1/training-14.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-15.png b/src/current/images/v2.1/training-15.png
deleted file mode 100644
index 1879ee29d2e..00000000000
Binary files a/src/current/images/v2.1/training-15.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-16.png b/src/current/images/v2.1/training-16.png
deleted file mode 100644
index 24f6fa3d908..00000000000
Binary files a/src/current/images/v2.1/training-16.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-17.png b/src/current/images/v2.1/training-17.png
deleted file mode 100644
index 9bb5c8a46dd..00000000000
Binary files a/src/current/images/v2.1/training-17.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-18.png b/src/current/images/v2.1/training-18.png
deleted file mode 100644
index 8f0ae7aa857..00000000000
Binary files a/src/current/images/v2.1/training-18.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-19.png b/src/current/images/v2.1/training-19.png
deleted file mode 100644
index e1a2414bf29..00000000000
Binary files a/src/current/images/v2.1/training-19.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-2.png b/src/current/images/v2.1/training-2.png
deleted file mode 100644
index d6d8afd7828..00000000000
Binary files a/src/current/images/v2.1/training-2.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-20.png b/src/current/images/v2.1/training-20.png
deleted file mode 100644
index d55c4f249ae..00000000000
Binary files a/src/current/images/v2.1/training-20.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-21.png b/src/current/images/v2.1/training-21.png
deleted file mode 100644
index 5726c9c69a7..00000000000
Binary files a/src/current/images/v2.1/training-21.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-22.png b/src/current/images/v2.1/training-22.png
deleted file mode 100644
index fe2ca336a95..00000000000
Binary files a/src/current/images/v2.1/training-22.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-23.png b/src/current/images/v2.1/training-23.png
deleted file mode 100644
index de87538279f..00000000000
Binary files a/src/current/images/v2.1/training-23.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-3.png b/src/current/images/v2.1/training-3.png
deleted file mode 100644
index 02b5724da59..00000000000
Binary files a/src/current/images/v2.1/training-3.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-4.png b/src/current/images/v2.1/training-4.png
deleted file mode 100644
index ae55051e60e..00000000000
Binary files a/src/current/images/v2.1/training-4.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-5.png b/src/current/images/v2.1/training-5.png
deleted file mode 100644
index 65c805404c4..00000000000
Binary files a/src/current/images/v2.1/training-5.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-6.1.png b/src/current/images/v2.1/training-6.1.png
deleted file mode 100644
index 128ab631ce8..00000000000
Binary files a/src/current/images/v2.1/training-6.1.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-6.png b/src/current/images/v2.1/training-6.png
deleted file mode 100644
index 8d93f4c3e3d..00000000000
Binary files a/src/current/images/v2.1/training-6.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-7.png b/src/current/images/v2.1/training-7.png
deleted file mode 100644
index 46179bfd04b..00000000000
Binary files a/src/current/images/v2.1/training-7.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-8.png b/src/current/images/v2.1/training-8.png
deleted file mode 100644
index d31f2e95a29..00000000000
Binary files a/src/current/images/v2.1/training-8.png and /dev/null differ
diff --git a/src/current/images/v2.1/training-9.png b/src/current/images/v2.1/training-9.png
deleted file mode 100644
index f386b9a9aa7..00000000000
Binary files a/src/current/images/v2.1/training-9.png and /dev/null differ
diff --git a/src/current/images/v2.1/window-functions.png b/src/current/images/v2.1/window-functions.png
deleted file mode 100644
index 887ceeac669..00000000000
Binary files a/src/current/images/v2.1/window-functions.png and /dev/null differ
diff --git a/src/current/images/v22.2/locality-aware-backups.png b/src/current/images/v22.2/locality-aware-backups.png
new file mode 100644
index 00000000000..8b2f1d79859
Binary files /dev/null and b/src/current/images/v22.2/locality-aware-backups.png differ
diff --git a/src/current/releases/v2.1.md b/src/current/releases/v2.1.md
index 6b740b1a5c7..b48d1ffbde2 100644
--- a/src/current/releases/v2.1.md
+++ b/src/current/releases/v2.1.md
@@ -8,16 +8,34 @@ docs_area: releases
keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes
---
-{% assign rel = site.data.releases | where_exp: "rel", "rel.major_version == page.major_version" | sort: "release_date" | reverse %}
-{% assign vers = site.data.versions | where_exp: "vers", "vers.major_version == page.major_version" | first %}
+This release is no longer supported. For more information, see our [Release support policy]({% link releases/release-support-policy.md %}).
-{% assign today = "today" | date: "%Y-%m-%d" %}
-
-{% include releases/testing-release-notice.md major_version=vers %}
-
-{% include releases/whats-new-intro.md major_version=vers %}
-
-{% for r in rel %}
-{% include releases/{{ page.major_version }}/{{ r.release_name }}.md release=r.release_name release_date=r.release_date %}
-{% endfor %}
+To download the archived documentation for this release, see [Archived Documentation]({% link releases/archived-documentation.md %}).
diff --git a/src/current/v19.1/dbeaver.md b/src/current/v19.1/dbeaver.md
index aa9f898fefe..501f404c79f 100644
--- a/src/current/v19.1/dbeaver.md
+++ b/src/current/v19.1/dbeaver.md
@@ -29,17 +29,17 @@ To work through this tutorial, take the following steps:
Start DBeaver, and select **Database > New Connection** from the menu. In the dialog that appears, select **CockroachDB** from the list.
-
+
## Step 2. Update the connection settings
On the **Create new connection** dialog that appears, click **Network settings**.
-
+
From the network settings, click the **SSL** tab. It will look like the screenshot below.
-
+
Check the **Use SSL** checkbox as shown, and fill in the text areas as follows:
@@ -57,13 +57,13 @@ Select **require** from the **SSL mode** dropdown. There is no need to set the
Click **Test Connection ...**. If everything worked, you will see a **Success** dialog like the one shown below.
-
+
## Step 4. Start using DBeaver
Click **Finish** to get started using DBeaver with CockroachDB.
-
+
For more information about using DBeaver, see the [DBeaver documentation](https://dbeaver.io/docs/).
diff --git a/src/current/v19.1/intellij-idea.md b/src/current/v19.1/intellij-idea.md
index b1845a9539d..759dc3ad1d5 100644
--- a/src/current/v19.1/intellij-idea.md
+++ b/src/current/v19.1/intellij-idea.md
@@ -33,7 +33,7 @@ Users can expect to encounter the following behaviors when using CockroachDB wit
##### [XXUUU] ERROR: could not decorrelate subquery...
-
+
Displays once per load of schema.
@@ -41,7 +41,7 @@ Displays once per load of schema.
##### [42883] ERROR: unknown function: pg_function_is_visible() Failed to retrieve...
-
+
Displays periodically. Does not impact functionality.
@@ -49,7 +49,7 @@ Display periodically. Does not impact functionality.
##### [42703] org.postgresql.util.PSQLException: ERROR: column "n.xmin" does not exist
-
+
Requires setting **Introspect using JDBC metadata** ([details below](#set-cockroachdb-as-a-data-source-in-intellij)).
@@ -57,8 +57,8 @@ Requires setting **Introspect using JDBC metadata** ([details below](#set-cockro
## Set CockroachDB as a Data Source in IntelliJ
-1. Launch the **Database** tool window. (**View** > **Tool Windows** > **Database**)
-1. Add a PostgreSQL data source. (**New (+)** > **Data Source** > **PostgreSQL**)
+1. Launch the **Database** tool window. (**View** > **Tool Windows** > **Database**)
+1. Add a PostgreSQL data source. (**New (+)** > **Data Source** > **PostgreSQL**)
1. On the **General** tab, enter your database's connection string:
Field | Value
@@ -70,10 +70,10 @@ Requires setting **Introspect using JDBC metadata** ([details below](#set-cockro
**Password** | If your cluster uses password authentication, enter the password.
**Driver** | Select or install **PostgreSQL** using a version greater than or equal to 41.1. (Older drivers have not been tested.)
-
+
1. Install or select a **PostgreSQL** driver. We recommend a version greater than or equal to 41.1.
1. If your cluster uses SSL authentication, go to the **SSH/SSL** tab, select **Use SSL** and provide the location of your certificate files.
-1. Go to the **Options** tab, and then select **Introspect using JDBC metadata**.
+1. Go to the **Options** tab, and then select **Introspect using JDBC metadata**.
1. Click **OK**.
You can now use IntelliJ's [database tool window](https://www.jetbrains.com/help/idea/working-with-the-database-tool-window.html) to interact with your CockroachDB cluster.
diff --git a/src/current/v19.1/upgrade-cockroach-version.md b/src/current/v19.1/upgrade-cockroach-version.md
index 6a23e9e37e6..24758e3f5f1 100644
--- a/src/current/v19.1/upgrade-cockroach-version.md
+++ b/src/current/v19.1/upgrade-cockroach-version.md
@@ -13,7 +13,7 @@ To upgrade to a new version, you must first be on a [production release](../rele
Therefore, if you are upgrading from v2.0 to v19.1, or from a testing release (alpha/beta) of v2.1 to v19.1:
-1. First [upgrade to a production release of v2.1](../v2.1/upgrade-cockroach-version.html). Be sure to complete all the steps.
+1. First [upgrade to a production release of v2.1](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version.html). Be sure to complete all the steps.
2. Then return to this page and perform a second rolling upgrade to v19.1.
@@ -47,7 +47,7 @@ This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades wit
By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in v19.1](#features-that-require-upgrade-finalization). However, it will no longer be possible to perform a downgrade to v2.1. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in [step 5](#step-5-finish-the-upgrade):
-1. [Upgrade to v2.1](../v2.1/upgrade-cockroach-version.html), if you haven't already.
+1. [Upgrade to v2.1](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version.html), if you haven't already.
2. Start the [`cockroach sql`](use-the-built-in-sql-client.html) shell against any node in the cluster.
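
A minimal sketch of the disable-and-later-finalize flow described above, assuming the standard `cluster.preserve_downgrade_option` cluster setting; run it from the SQL shell started in step 2 before upgrading any nodes:

~~~ sql
-- Block auto-finalization so a rollback to v2.1 remains possible.
> SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.1';

-- ... perform the rolling upgrade and monitor the cluster ...

-- Once satisfied with stability, finalize the upgrade manually.
> RESET CLUSTER SETTING cluster.preserve_downgrade_option;
~~~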
diff --git a/src/current/v19.1/use-the-built-in-sql-client.md b/src/current/v19.1/use-the-built-in-sql-client.md
index 62cc8572d9d..0119b272d71 100644
--- a/src/current/v19.1/use-the-built-in-sql-client.md
+++ b/src/current/v19.1/use-the-built-in-sql-client.md
@@ -187,7 +187,7 @@ See also:
INSERT
UPSERT
DELETE
- https://www.cockroachlabs.com/docs/v2.1/update.html
+ https://www.cockroachlabs.com/docs/stable/update.html
~~~
~~~ sql
@@ -203,7 +203,7 @@ Signature Category
uuid_v4() -> bytes [ID Generation]
See also:
- https://www.cockroachlabs.com/docs/v2.1/functions-and-operators.html
+ https://www.cockroachlabs.com/docs/stable/functions-and-operators.html
~~~
### Shortcuts
diff --git a/src/current/v2.0/build-a-java-app-with-cockroachdb.md b/src/current/v2.0/build-a-java-app-with-cockroachdb.md
index c4c2768c2f4..cdf240d8942 100644
--- a/src/current/v2.0/build-a-java-app-with-cockroachdb.md
+++ b/src/current/v2.0/build-a-java-app-with-cockroachdb.md
@@ -124,7 +124,9 @@ To run it:
account 2: 350
~~~
-{% include v2.1/client-transaction-retry.md %}
+{{site.data.alerts.callout_info}}
+With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
+{{site.data.alerts.end}}
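
As a language-neutral companion to the sample below, here is a sketch of the `SAVEPOINT`-based retry protocol that such client code typically implements; `cockroach_restart` is the savepoint name CockroachDB recognizes for retries:

~~~ sql
> BEGIN;
> SAVEPOINT cockroach_restart;
-- Issue the transaction's statements here.
-- On a retryable error (SQLSTATE 40001), run
-- ROLLBACK TO SAVEPOINT cockroach_restart; and retry the statements.
> RELEASE SAVEPOINT cockroach_restart;
> COMMIT;
~~~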
{% include copy-clipboard.html %}
~~~ java
@@ -220,7 +222,9 @@ To run it:
$ java -classpath .:/path/to/postgresql.jar TxnSample
~~~
-{% include v2.1/client-transaction-retry.md %}
+{{site.data.alerts.callout_info}}
+With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
+{{site.data.alerts.end}}
{% include copy-clipboard.html %}
~~~ java
diff --git a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md b/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md
index 0a89b7bdff6..25815cdb64f 100644
--- a/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md
+++ b/src/current/v2.0/build-a-nodejs-app-with-cockroachdb.md
@@ -86,7 +86,9 @@ Next, use the following code to again connect as the `maxroach` user but this ti
Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/txn-sample.js) file, or create the file yourself and copy the code into it.
-{% include v2.1/client-transaction-retry.md %}
+{{site.data.alerts.callout_info}}
+With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
+{{site.data.alerts.end}}
{% include copy-clipboard.html %}
~~~ js
@@ -176,7 +178,9 @@ Next, use the following code to again connect as the `maxroach` user but this ti
Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v2.0/app/insecure/txn-sample.js) file, or create the file yourself and copy the code into it.
-{% include v2.1/client-transaction-retry.md %}
+{{site.data.alerts.callout_info}}
+With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
+{{site.data.alerts.end}}
{% include copy-clipboard.html %}
~~~ js
diff --git a/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md b/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md
index 03b9648caee..f461cee9b94 100644
--- a/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md
+++ b/src/current/v2.0/build-a-ruby-app-with-cockroachdb.md
@@ -78,7 +78,9 @@ Next, use the following code to again connect as the `maxroach` user but this ti
Download the txn-sample.rb file, or create the file yourself and copy the code into it.
-{% include v2.1/client-transaction-retry.md %}
+{{site.data.alerts.callout_info}}
+With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
+{{site.data.alerts.end}}
{% include copy-clipboard.html %}
~~~ ruby
@@ -160,7 +162,9 @@ Next, use the following code to again connect as the `maxroach` user but this ti
Download the txn-sample.rb file, or create the file yourself and copy the code into it.
-{% include v2.1/client-transaction-retry.md %}
+{{site.data.alerts.callout_info}}
+With the default `SERIALIZABLE` isolation level, CockroachDB may require the client to retry a transaction in case of read/write contention. The code sample below shows how to implement retry logic.
+{{site.data.alerts.end}}
{% include copy-clipboard.html %}
~~~ ruby
diff --git a/src/current/v2.1/404.md b/src/current/v2.1/404.md
deleted file mode 100755
index 13a69ddde5c..00000000000
--- a/src/current/v2.1/404.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Page Not Found
-description: "Page not found."
-sitemap: false
-search: exclude
-related_pages: none
-toc: false
----
-
-
-{%comment%}
-
-
-{%endcomment%}
\ No newline at end of file
diff --git a/src/current/v2.1/add-column.md b/src/current/v2.1/add-column.md
deleted file mode 100644
index 1419cfbfcda..00000000000
--- a/src/current/v2.1/add-column.md
+++ /dev/null
@@ -1,148 +0,0 @@
----
-title: ADD COLUMN
-summary: Use the ADD COLUMN statement to add columns to tables.
-toc: true
----
-
-The `ADD COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and adds columns to tables.
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/add_column.html %}
-
-
-## Required privileges
-
-The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table.
-
-## Parameters
-
- Parameter | Description
------------|-------------
- `table_name` | The name of the table to which you want to add the column.
- `column_name` | The name of the column you want to add. The column name must follow these [identifier rules](keywords-and-identifiers.html#identifiers) and must be unique within the table but can have the same name as indexes or constraints.
- `typename` | The [data type](data-types.html) of the new column.
 `col_qualification` | An optional list of column definitions, which may include [column-level constraints](constraints.html), [collation](collate.html), or [column family assignments](column-families.html). <br><br> If the column family is not specified, the column will be added to the first column family. For more information about how column families are assigned, see [Column Families](column-families.html#assign-column-families-when-adding-columns). <br><br> Note that it is not possible to add a column with the [foreign key](foreign-key.html) constraint. As a workaround, you can add the column without the constraint, then use [`CREATE INDEX`](create-index.html) to index the column, and then use [`ADD CONSTRAINT`](add-constraint.html) to add the foreign key constraint to the column.
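
The workaround described in the note above might look like the following sketch; the `customer_id` column and `customers` table are hypothetical stand-ins:

~~~ sql
> ALTER TABLE accounts ADD COLUMN customer_id INT;
> CREATE INDEX ON accounts (customer_id);
> ALTER TABLE accounts ADD CONSTRAINT customer_fk FOREIGN KEY (customer_id) REFERENCES customers (id);
~~~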
-
-## Viewing schema changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Examples
-
-### Add a single column
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN names STRING;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| balance | DECIMAL | true | NULL | | {} |
-| names | STRING | true | NULL | | {} |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(3 rows)
-~~~
-
-### Add multiple columns
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location STRING, ADD COLUMN amount DECIMAL;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| balance | DECIMAL | true | NULL | | {} |
-| names | STRING | true | NULL | | {} |
-| location | STRING | true | NULL | | {} |
-| amount | DECIMAL | true | NULL | | {} |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(5 rows)
-~~~
-
-### Add a column with a `NOT NULL` constraint and a `DEFAULT` value
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN interest DECIMAL NOT NULL DEFAULT (DECIMAL '1.3');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-~~~
-+-------------+-----------+-------------+------------------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+------------------------+-----------------------+-------------+
-| id | INT | false | NULL | | {"primary"} |
-| balance | DECIMAL | true | NULL | | {} |
-| names | STRING | true | NULL | | {} |
-| location | STRING | true | NULL | | {} |
-| amount | DECIMAL | true | NULL | | {} |
-| interest | DECIMAL | false | 1.3:::DECIMAL::DECIMAL | | {} |
-+-------------+-----------+-------------+------------------------+-----------------------+-------------+
-(6 rows)
-~~~
-
-### Add a column with `NOT NULL` and `UNIQUE` constraints
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN cust_number DECIMAL UNIQUE NOT NULL;
-~~~
-
-### Add a column with collation
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN more_names STRING COLLATE en;
-~~~
-
-### Add a column and assign it to a column family
-
-#### Add a column and assign it to a new column family
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location1 STRING CREATE FAMILY new_family;
-~~~
-
-#### Add a column and assign it to an existing column family
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location2 STRING FAMILY existing_family;
-~~~
-
-#### Add a column and create a new column family if the column family does not exist
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN new_name STRING CREATE IF NOT EXISTS FAMILY f1;
-~~~
-
-## See also
-- [`ALTER TABLE`](alter-table.html)
-- [Column-level Constraints](constraints.html)
-- [Collation](collate.html)
-- [Column Families](column-families.html)
diff --git a/src/current/v2.1/add-constraint.md b/src/current/v2.1/add-constraint.md
deleted file mode 100644
index 744cedf49a6..00000000000
--- a/src/current/v2.1/add-constraint.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: ADD CONSTRAINT
-summary: Use the ADD CONSTRAINT statement to add constraints to columns.
-toc: true
----
-
-The `ADD CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and can add the following [constraints](constraints.html) to columns:
-
-- [`CHECK`](check.html)
-- [Foreign key](foreign-key.html)
-- [`UNIQUE`](unique.html)
-
-{{site.data.alerts.callout_info}}
-The [`PRIMARY KEY`](primary-key.html) and [`NOT NULL`](not-null.html) constraints can only be applied through [`CREATE TABLE`](create-table.html). The [`DEFAULT`](default-value.html) constraint is managed through [`ALTER COLUMN`](alter-column.html).
-{{site.data.alerts.end}}
-
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/add_constraint.html %}
-
-
-## Required privileges
-
-The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table.
-
-## Parameters
-
- Parameter | Description
------------|-------------
- `table_name` | The name of the table containing the column you want to constrain.
- `constraint_name` | The name of the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers).
 `constraint_elem` | The [`CHECK`](check.html), [foreign key](foreign-key.html), or [`UNIQUE`](unique.html) constraint you want to add. <br><br> Adding/changing a `DEFAULT` constraint is done through [`ALTER COLUMN`](alter-column.html). <br><br> Adding/changing the table's `PRIMARY KEY` is not supported through `ALTER TABLE`; it can only be specified during [table creation](create-table.html#create-a-table-primary-key-defined).
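
For example, adding or changing a `DEFAULT` value goes through `ALTER COLUMN` rather than `ADD CONSTRAINT`. A minimal sketch, borrowing the `orders` table and `status` column from the examples below:

~~~ sql
> ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'open';
~~~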
-
-## Viewing schema changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Examples
-
-### Add the `UNIQUE` constraint
-
-Adding the [`UNIQUE` constraint](unique.html) requires that all of a column's values be distinct from one another (except for *NULL* values).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE orders ADD CONSTRAINT id_customer_unique UNIQUE (id, customer);
-~~~
-
-### Add the `CHECK` constraint
-
-Adding the [`CHECK` constraint](check.html) requires that all of a column's values evaluate to `TRUE` for a Boolean expression.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE orders ADD CONSTRAINT total_0_check CHECK (total > 0);
-~~~
-
-### Add the foreign key constraint with `CASCADE`
-
-Before you can add the [foreign key](foreign-key.html) constraint to columns, the columns must already be indexed. If they are not already indexed, use [`CREATE INDEX`](create-index.html) to index them and only then use the `ADD CONSTRAINT` statement to add the Foreign Key constraint to the columns.
-
-For example, let's say you have two tables, `orders` and `customers`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE customers;
-~~~
-
-~~~
-+-----------+-------------------------------------------------+
-| Table | CreateTable |
-+-----------+-------------------------------------------------+
-| customers | CREATE TABLE customers ( |
-| | id INT NOT NULL, |
-| | "name" STRING NOT NULL, |
-| | address STRING NULL, |
-| | CONSTRAINT "primary" PRIMARY KEY (id ASC), |
-| | FAMILY "primary" (id, "name", address) |
-| | ) |
-+-----------+-------------------------------------------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE orders;
-~~~
-
-~~~
-+--------+-------------------------------------------------------------------------------------------------------------+
-| Table | CreateTable |
-+--------+-------------------------------------------------------------------------------------------------------------+
-| orders | CREATE TABLE orders ( |
-| | id INT NOT NULL, |
-| | customer_id INT NULL, |
-| | status STRING NOT NULL, |
-| | CONSTRAINT "primary" PRIMARY KEY (id ASC), |
-| | FAMILY "primary" (id, customer_id, status), |
-| | CONSTRAINT check_status CHECK (status IN ('open':::STRING, 'complete':::STRING, 'cancelled':::STRING)) |
-| | ) |
-+--------+-------------------------------------------------------------------------------------------------------------+
-(1 row)
-~~~
-
-To ensure that each value in the `orders.customer_id` column matches a unique value in the `customers.id` column, you want to add the Foreign Key constraint to `orders.customer_id`. So you first create an index on `orders.customer_id`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE INDEX ON orders (customer_id);
-~~~
-
-Then you add the foreign key constraint.
-
-You can include a [foreign key action](foreign-key.html#foreign-key-actions) to specify what happens when a foreign key is updated or deleted.
-
-In this example, let's use `ON DELETE CASCADE` (i.e., when a referenced row is deleted, all dependent objects are also deleted).
-
-{{site.data.alerts.callout_danger}}CASCADE does not list objects it drops or updates, so it should be used cautiously.{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE orders ADD CONSTRAINT customer_fk FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE;
-~~~
-
-If you had tried to add the constraint before indexing the column, you would have received an error:
-
-~~~
-pq: foreign key requires an existing index on columns ("customer_id")
-~~~
-
-## See also
-
-- [Constraints](constraints.html)
-- [Foreign Key Constraint](foreign-key.html)
-- [`ALTER COLUMN`](alter-column.html)
-- [`CREATE TABLE`](create-table.html)
-- [`ALTER TABLE`](alter-table.html)
diff --git a/src/current/v2.1/admin-ui-access-and-navigate.md b/src/current/v2.1/admin-ui-access-and-navigate.md
deleted file mode 100644
index 4e10af55da0..00000000000
--- a/src/current/v2.1/admin-ui-access-and-navigate.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-title: Use the CockroachDB Admin UI
-summary: Learn how to access and navigate the Admin UI.
-toc: true
----
-
-The built-in Admin UI helps you monitor and troubleshoot CockroachDB by providing information about the cluster's health, configuration, and operations.
-
-## Access the Admin UI
-
-For insecure clusters, anyone can access and view the Admin UI. For secure clusters, only authorized users can [access and view the Admin UI](#accessing-the-admin-ui-for-a-secure-cluster).
-
-You can access the Admin UI from any node in the cluster.
-
-The Admin UI is reachable at the IP address/hostname and port set via the `--http-addr` flag when [starting each node](start-a-node.html), for example, `http://<address>:<port>` for an insecure cluster or `https://<address>:<port>` for a secure cluster.
-
-If `--http-addr` is not specified when starting a node, the Admin UI is reachable at the IP address/hostname set via the `--listen-addr` flag and port `8080`.
-
-For additional guidance on accessing the Admin UI in the context of cluster deployment, see [Start a Local Cluster](start-a-local-cluster.html) and [Manual Deployment](manual-deployment.html).
-
-### Accessing the Admin UI for a secure cluster
-
-On [accessing the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), your browser will consider the CockroachDB-created certificate invalid, so you’ll need to click through a warning message to get to the UI. For secure clusters, you can avoid getting the warning message by using a certificate issued by a public CA. For more information, refer to [Use a UI certificate and key to access the Admin UI](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster).
-
-For each user who should have access to the Admin UI for a secure cluster, [create a user with a password](create-user.html). On accessing the Admin UI, the users will see a Login screen, where they will need to enter their usernames and passwords.
-
-{{site.data.alerts.callout_info}}
-This login information is stored in a system table that is replicated like other data in the cluster. If a majority of the nodes holding replicas of the system table data go down, users will be locked out of the Admin UI.
-{{site.data.alerts.end}}
-
-To log out of the Admin UI, click the **Log Out** link at the bottom of the left-hand navigation bar.
-
-## Navigate the Admin UI
-
-The left-hand navigation bar allows you to navigate to the [Cluster Overview page](admin-ui-access-and-navigate.html), [cluster metrics dashboards](admin-ui-overview.html), the [Databases page](admin-ui-databases-page.html), the [Statements page](admin-ui-statements-page.html), the [Jobs page](admin-ui-jobs-page.html), and the [Advanced Debugging page](admin-ui-debug-pages.html).
-
-The main panel display changes for each page:
-
-Page | Main Panel Component
------------|------------
-Cluster Overview |
-Databases | Information about the tables and grants in your [databases](admin-ui-databases-page.html).
-Statements | Information about the SQL [statements](admin-ui-statements-page.html) running in the cluster.
-Jobs | Information about all currently active schema changes and backup/restore [jobs](admin-ui-jobs-page.html).
-Advanced Debugging | Advanced monitoring and troubleshooting [reports](admin-ui-debug-pages.html). These pages are experimental. If you find an issue, let us know through [these channels](https://www.cockroachlabs.com/community/).
-
-### Cluster Metrics
-
-The **Cluster Metrics** dashboards display the time series graphs that are useful to visualize and monitor data trends. To access the time series graphs, click **Metrics** on the left.
-
-You can hover over each graph to see actual point-in-time values.
-
-
-
-{{site.data.alerts.callout_info}}
-By default, CockroachDB stores time series metrics for the last 30 days, but you can reduce the interval for timeseries storage. Alternatively, if you are exclusively using a third-party tool such as [Prometheus](monitor-cockroachdb-with-prometheus.html) for time series monitoring, you can disable time series storage entirely. For more details, see this [FAQ](operational-faqs.html#can-i-reduce-or-disable-the-storage-of-timeseries-data).
-{{site.data.alerts.end}}
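
As a sketch of the disable-entirely option, assuming the `timeseries.storage.enabled` cluster setting covered in the linked FAQ:

~~~ sql
> SET CLUSTER SETTING timeseries.storage.enabled = false;
~~~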
-
-#### Change time range
-
-You can change the time range by clicking on the time window.
-
-
-{{site.data.alerts.callout_info}}The Admin UI shows time in UTC, even if you set a different time zone for your cluster. {{site.data.alerts.end}}
-
-#### View metrics for a single node
-
-By default, the time series panel displays the metrics for the entire cluster. To view the metrics for an individual node, select the node from the **Graph** drop-down list.
-
-
-### Summary panel
-
-The **Cluster Metrics** dashboards display the **Summary** panel of key metrics. To view the **Summary** panel, click **Metrics** on the left.
-
-
-
-The **Summary** panel provides the following metrics:
-
-Metric | Description
---------|----
-Total Nodes | The total number of nodes in the cluster. Decommissioned nodes are not included in the Total Nodes count. <br><br> You can further drill down into the node details by clicking on [**View nodes list**](admin-ui-cluster-overview-page.html#node-list).
-Dead Nodes | The number of [dead nodes](admin-ui-cluster-overview-page.html#dead-nodes) in the cluster.
-Capacity Used | The storage capacity used as a percentage of total storage capacity allocated across all nodes.
-Unavailable Ranges | The number of unavailable ranges in the cluster. A non-zero number indicates an unstable cluster.
-Queries per second | The number of SQL queries executed per second.
-P50 Latency | The 50th percentile of service latency. Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client.
-P99 Latency | The 99th percentile of service latency.
-
-{{site.data.alerts.callout_info}}
-{% include v2.1/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
-
-### Events panel
-
-The **Cluster Metrics** dashboards display the **Events** panel, which lists the 10 most recent events logged for all nodes across the cluster. To view the **Events** panel, click **Metrics** on the left-hand navigation bar. To see the list of all events, click **View all events** in the **Events** panel.
-
-
-
-The following types of events are listed:
-
-- Database created
-- Database dropped
-- Table created
-- Table dropped
-- Table altered
-- Index created
-- Index dropped
-- View created
-- View dropped
-- Schema change reversed
-- Schema change finished
-- Node joined
-- Node decommissioned
-- Node restarted
-- Cluster setting changed
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-cluster-overview-page.md b/src/current/v2.1/admin-ui-cluster-overview-page.md
deleted file mode 100644
index 501c5ad17da..00000000000
--- a/src/current/v2.1/admin-ui-cluster-overview-page.md
+++ /dev/null
@@ -1,90 +0,0 @@
----
-title: Cluster Overview Page
-toc: true
----
-
-The **Cluster Overview** page of the Admin UI provides details of the cluster nodes and their liveness status, replication status, uptime, and key hardware metrics. [Enterprise users](enterprise-licensing.html) can enable and switch to the [Node Map](admin-ui-cluster-overview-page.html#node-map-enterprise) view.
-
-## Cluster Overview Panel
-
-
-
-The **Cluster Overview** panel provides the following metrics:
-
-Metric | Description
---------|----
-Capacity Usage | Used capacity: The storage capacity used by CockroachDB (represented as a percentage of total storage capacity allocated across all nodes). <br><br> Usable capacity: The space available for CockroachDB data storage (i.e., the storage capacity of the machine excluding the capacity used by the Cockroach binary, operating system, and other system files).
-Node Status | The number of [live nodes](#live-nodes) in the cluster. <br><br> The number of suspect nodes in the cluster. A node is considered suspect if its liveness status is unavailable or the node is in the process of decommissioning. <br><br> The number of [dead nodes](#dead-nodes) in the cluster.
-Replication Status | The total number of [ranges](architecture/overview.html#glossary) in the cluster. <br><br> The number of [under-replicated ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology) in the cluster. A non-zero number indicates an unstable cluster. <br><br> The number of [unavailable ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology) in the cluster. A non-zero number indicates an unstable cluster.
-
-## Node List
-
-The **Node List** is the default view on the **Overview** page.
-
-
-### Live Nodes
-Live nodes are nodes that are online and responding. They are marked with a green dot. If a node is removed or dies, the dot turns yellow to indicate that it is not responding. If the node remains unresponsive for a certain amount of time (5 minutes by default), the dot turns red and the node is moved to the [**Dead Nodes**](#dead-nodes) section, indicating that it is no longer expected to come back.
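
The 5-minute threshold mentioned above is controlled by a cluster setting; a sketch, assuming the `server.time_until_store_dead` setting applies to this version, that raises the window to 10 minutes:

~~~ sql
> SET CLUSTER SETTING server.time_until_store_dead = '10m0s';
~~~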
-
-The following details are shown for each live node:
-
-Column | Description
--------|------------
-ID | The ID of the node.
-Address | The address of the node. You can click on the address to view further details about the node.
-Uptime | How long the node has been running.
-Replicas | The number of replicas on the node.
-CPUs | The number of CPU cores on the machine.
-Capacity Usage | The storage capacity used by CockroachDB as a percentage of the total usable capacity on the node. The value is represented numerically and as a bar graph.
-Mem Usage | The memory used by CockroachDB as a percentage of the total memory on the node. The value is represented numerically and as a bar graph.
-Version | The build tag of the CockroachDB version installed on the node.
-Logs | Click **Logs** to see detailed logs for the node.
-
-### Dead Nodes
-
-Nodes are considered dead once they have not responded for a certain amount of time (5 minutes by default). At this point, the automated repair process starts, wherein CockroachDB automatically rebalances replicas from the dead node, using the unaffected replicas as sources. See [Stop a Node](stop-a-node.html#how-it-works) for more information.
-
-The following details are shown for each dead node:
-
-Column | Description
--------|------------
-ID | The ID of the node.
-Address | The address of the node. You can click on the address to view further details about the node.
-Down Since | How long the node has been down.
-
-### Decommissioned Nodes
-
-Nodes that have been decommissioned for permanent removal from the cluster are listed in the **Decommissioned Nodes** table.
-
-When you decommission a node, CockroachDB lets the node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node so that it can be safely shut down. See [Remove Nodes](remove-nodes.html) for more information.
-
-## Node Map (Enterprise)
-
-The **Node Map** is an [enterprise-only](enterprise-licensing.html) feature that gives you a visual representation of the geographical configuration of your cluster.
-
-
-
-The Node Map consists of the following components:
-
-### Region component
-
-
-
-{{site.data.alerts.callout_info}}
-For multi-core systems, the user CPU percent can be greater than 100%. Full utilization of one core is considered as 100% CPU usage. If you have n cores, then the user CPU percent can range from 0% (indicating an idle system) to (n*100)% (indicating full utilization).
-{{site.data.alerts.end}}
-
-### Node component
-
-
-
-{{site.data.alerts.callout_info}}
-For multi-core systems, the user CPU percent can be greater than 100%. Full utilization of one core is considered as 100% CPU usage. If you have n cores, then the user CPU percent can range from 0% (indicating an idle system) to (n*100)% (indicating full utilization).
-{{site.data.alerts.end}}
-
-For guidance on enabling and using the node map, see [Enable Node Map](enable-node-map.html).
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-custom-chart-debug-page.md b/src/current/v2.1/admin-ui-custom-chart-debug-page.md
deleted file mode 100644
index c9b18ac8ef4..00000000000
--- a/src/current/v2.1/admin-ui-custom-chart-debug-page.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: Custom Chart Debug Page
-toc: true
----
-
-The **Custom Chart** debug page in the Admin UI can be used to create one or multiple custom charts showing any combination of over [200 available metrics](#available-metrics).
-
-The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Like any other URL, it can be bookmarked, sit in a pinned tab in your browser, etc.
-
-
-## Accessing the **Custom Chart** page
-
-To access the **Custom Chart** debug page, [access the Admin UI](admin-ui-access-and-navigate.html), and either:
-
-- Open http://localhost:8080/#/debug/chart in your browser (replacing `localhost` and `8080` with your node's host and port).
-
-- Click the gear icon on the left to access the **Advanced Debugging Page**. In the **Reports** section, click **Custom TimeSeries Chart**.
-
-## Using the **Custom Chart** page
-
-
-
-On the **Custom Chart** page, you can set the time span for all charts, add new custom charts, and customize each chart:
-
-- To set the time span for the page, use the dropdown menu above the charts and select the desired time span.
-
-- To add a chart, click **Add Chart** and customize the new chart.
-
-- To customize each chart, use the **Units** dropdown menu to set the units to display. Then use the table below the chart to select the metrics being queried, and how they'll be combined and displayed. Options include:
-{% include {{page.version.version}}/admin-ui-custom-chart-debug-page-00.html %}
-
-## Examples
-
-### Query user and system CPU usage
-
-
-
-To compare system vs. userspace CPU usage, select the following values under **Metric Name**:
-
-- `sys.cpu.sys.percent`
-- `sys.cpu.user.percent`
-
-The Y-axis label is the **Count**. A count of 1 represents 100% utilization. The **Aggregator** of **Sum** can show the count to be above 1, which would mean CPU utilization is greater than 100%.
-
-Checking **Per Node** displays statistics for each node, which could show whether an individual node's CPU usage was higher or lower than the average.
-
-## Available metrics
-
-{{site.data.alerts.callout_info}}
-This list is taken directly from the source code and is subject to change. Some of the metrics listed below are already visible in other areas of the [Admin UI](admin-ui-overview.html).
-{{site.data.alerts.end}}
-
-{% include {{page.version.version}}/metric-names.md %}
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-databases-page.md b/src/current/v2.1/admin-ui-databases-page.md
deleted file mode 100644
index a50c0e2ae25..00000000000
--- a/src/current/v2.1/admin-ui-databases-page.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Database Page
-toc: true
----
-
-The **Databases** page of the Admin UI provides details of the databases configured, the tables in each database, and the grants assigned to each user. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Databases** on the left-hand navigation bar.
-
-
-## Tables view
-
-The **Tables** view shows details of the system table as well as the tables in your databases. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then select **Databases** from the left-hand navigation bar.
-
-
-
-The following details are displayed for each table:
-
-Metric | Description
---------|----
-Table Name | The name of the table.
-Size | Approximate total disk size of the table across all replicas.
-Ranges | The number of ranges in the table.
-\# of Columns | The number of columns in the table.
-\# of Indices | The number of indices for the table.
-
-## Grants view
-
-The **Grants** view shows the [privileges](authorization.html#assign-privileges) granted to users for each database. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), select **Databases** from the left-hand navigation bar, and then select **Grants** from the **View** menu.
-
-For more details about grants and privileges, see [Grants](grant.html).
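
The same information is available from SQL; a minimal sketch, using a hypothetical `bank` database:

~~~ sql
> SHOW GRANTS ON DATABASE bank;
~~~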
-
-
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-debug-pages.md b/src/current/v2.1/admin-ui-debug-pages.md
deleted file mode 100644
index 921cea97b06..00000000000
--- a/src/current/v2.1/admin-ui-debug-pages.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Advanced Debugging Page
-toc: true
----
-
-The **Advanced Debugging** page of the Admin UI provides links to advanced monitoring and troubleshooting reports and cluster configuration details. To view the **Advanced Debugging** page, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click the gear icon on the left-hand navigation bar.
-
-{{site.data.alerts.callout_info}}
-These pages are experimental and undocumented. If you find an issue, let us know through [these channels](https://www.cockroachlabs.com/community/).
- {{site.data.alerts.end}}
-
-## Reports and Configuration
-
-The following debug reports and configuration views are useful for monitoring and troubleshooting CockroachDB:
-
-Report | Description
---------|----
-[Custom Time Series Chart](admin-ui-custom-chart-debug-page.html) | Create a custom chart of time series data.
-Problem Ranges | View ranges in your cluster that are unavailable, underreplicated, slow, or have other problems.
-Network Latency | Check latencies between all nodes in your cluster.
-Data Distribution and Zone Configs | View the distribution of table data across nodes and verify zone configuration.
-Cluster Settings | View all cluster settings and their configured values.
-Localities | Check node localities for your cluster.
-
-## Even More Advanced Debugging
-
-The **Even More Advanced Debugging** section of the page lists additional reports that are largely internal and intended for use by CockroachDB developers. You can ignore this section while monitoring and troubleshooting CockroachDB. Alternatively, if you want to learn how to use these pages, feel free to contact us through [these channels](https://www.cockroachlabs.com/community/).
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-hardware-dashboard.md b/src/current/v2.1/admin-ui-hardware-dashboard.md
deleted file mode 100644
index c149b5e6973..00000000000
--- a/src/current/v2.1/admin-ui-hardware-dashboard.md
+++ /dev/null
@@ -1,107 +0,0 @@
----
-title: Hardware Dashboard
-summary: The Hardware dashboard lets you monitor CPU usage, disk throughput, network traffic, storage capacity, and memory.
-toc: true
----
-
-The **Hardware** dashboard lets you monitor CPU usage, disk throughput, network traffic, storage capacity, and memory. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left, and then select **Dashboard** > **Hardware**.
-
-The **Hardware** dashboard displays the following time series graphs:
-
-## CPU Percent
-
-
-
-- In the node view, the graph shows the percentage of CPU in use by the CockroachDB process for the selected node.
-
-- In the cluster view, the graph shows the percentage of CPU in use by the CockroachDB process across all nodes.
-
-{{site.data.alerts.callout_info}}
-For multi-core systems, the percentage of CPU usage is calculated by normalizing the CPU usage across all cores, whereby 100% utilization indicates that all cores are fully utilized.
-{{site.data.alerts.end}}
-
-## Memory Usage
-
-
-
-- In the node view, the graph shows the memory in use by CockroachDB for the selected node.
-
-- In the cluster view, the graph shows the memory in use by CockroachDB across all nodes in the cluster.
-
-## Disk Read Bytes
-
-
-
-- In the node view, the graph shows the 10-second average of the number of bytes read per second by all processes, including CockroachDB, for the selected node.
-
-- In the cluster view, the graph shows the 10-second average of the number of bytes read per second by all processes, including CockroachDB, across all nodes.
-
-## Disk Write Bytes
-
-
-
-- In the node view, the graph shows the 10-second average of the number of bytes written per second by all processes, including CockroachDB, for the node.
-
-- In the cluster view, the graph shows the 10-second average of the number of bytes written per second by all processes, including CockroachDB, across all nodes.
-
-## Disk Read Ops
-
-
-
-- In the node view, the graph shows the 10-second average of the number of disk read ops per second for all processes, including CockroachDB, for the selected node.
-
-- In the cluster view, the graph shows the 10-second average of the number of disk read ops per second for all processes, including CockroachDB, across all nodes.
-
-## Disk Write Ops
-
-
-
-- In the node view, the graph shows the 10-second average of the number of disk write ops per second for all processes, including CockroachDB, for the node.
-
-- In the cluster view, the graph shows the 10-second average of the number of disk write ops per second for all processes, including CockroachDB, across all nodes.
-
-## Disk IOPS in Progress
-
-
-
-- In the node view, the graph shows the number of disk reads and writes in queue for all processes, including CockroachDB, for the selected node.
-
-- In the cluster view, the graph shows the number of disk reads and writes in queue for all processes, including CockroachDB, across all nodes in the cluster.
-
-{{site.data.alerts.callout_info}}
-For Mac OS, this graph is not populated and shows zero disk IOPS in progress. This is a [known limitation](https://github.com/cockroachdb/cockroach/issues/27927) that may be lifted in the future.
-{{site.data.alerts.end}}
-
-## Available Disk Capacity
-
-
-
-- In the node view, the graph shows the available storage capacity for the selected node.
-
-- In the cluster view, the graph shows the available storage capacity across all nodes in the cluster.
-
-{{site.data.alerts.callout_info}}
-{% include v2.1/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
-
-## Network Bytes Received
-
-
-
-- In the node view, the graph shows the 10-second average of the number of network bytes received per second for all processes, including CockroachDB, for the node.
-
-- In the cluster view, the graph shows the 10-second average of the number of network bytes received per second for all processes, including CockroachDB, across all nodes.
-
-## Network Bytes Sent
-
-
-
-- In the node view, the graph shows the 10-second average of the number of network bytes sent per second by all processes, including CockroachDB, for the node.
-
-- In the cluster view, the graph shows the 10-second average of the number of network bytes sent per second by all processes, including CockroachDB, across all nodes.
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-jobs-page.md b/src/current/v2.1/admin-ui-jobs-page.md
deleted file mode 100644
index 9318d873a73..00000000000
--- a/src/current/v2.1/admin-ui-jobs-page.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Jobs Page
-toc: true
----
-
-The **Jobs** page of the Admin UI provides details about the backup/restore jobs as well as schema changes performed across all nodes in the cluster. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Jobs** on the left-hand navigation bar.
-
-
-## Job details
-
-The **Jobs** table displays the ID, description, user, creation time, and status of each backup and restore job, as well as schema changes performed across all nodes in the cluster. To view a job's full description, click the drop-down arrow in the first column.
-
-
-
-For changefeeds, the table displays a [high-water timestamp that advances as the changefeed progresses](change-data-capture.html#monitor-a-changefeed). This is a guarantee that all changes before or at the timestamp have been emitted. Hover over the high-water timestamp to view the [system time](as-of-system-time.html).
-
-## Filtering results
-
-You can filter the results based on the status of the jobs or the type of jobs (backups, restores, schema changes, or changefeeds). You can also choose to view either the latest 50 jobs or all the jobs across all nodes.
-
-Filter By | Description
-----------|------------
-Job Status | From the **Status** menu, select the required status filter.
-Job Type | From the **Type** menu, select **Backups**, **Restores**, **Imports**, **Schema Changes**, or **Changefeed**.
-Jobs Shown | From the **Show** menu, select **First 50** or **All**.
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-overview-dashboard.md b/src/current/v2.1/admin-ui-overview-dashboard.md
deleted file mode 100644
index 4162cd533f3..00000000000
--- a/src/current/v2.1/admin-ui-overview-dashboard.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title: Overview Dashboard
-summary: The Overview dashboard lets you monitor important SQL performance, replication, and storage metrics.
-toc: true
----
-
-The **Overview** dashboard lets you monitor important SQL performance, replication, and storage metrics. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and click **Metrics** on the left-hand navigation bar. The **Overview** dashboard is displayed by default.
-
-
-The **Overview** dashboard displays the following time series graphs:
-
-## SQL Queries
-
-
-
-- In the node view, the graph shows the 10-second average of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node.
-
-- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.
-
-## Service Latency: SQL, 99th percentile
-
-
-
-Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client.
-
-- In the node view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the node.
-
-- In the cluster view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency across all nodes in the cluster.
-
-## Replicas per Node
-
-
-
-Ranges are subsets of your data, which are replicated to ensure survivability. Ranges are replicated to a configurable number of CockroachDB nodes.
-
-- In the node view, the graph shows the number of range replicas on the selected node.
-
-- In the cluster view, the graph shows the number of range replicas on each node in the cluster.
-
-For details about how to control the number and location of replicas, see [Configure Replication Zones](configure-replication-zones.html).
-
-{{site.data.alerts.callout_info}}
-The timeseries data used to power the graphs in the Admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. For more details, see this [FAQ](operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes).
-{{site.data.alerts.end}}
-
-## Capacity
-
-
-
-You can monitor the **Capacity** graph to determine when additional storage is needed.
-
-- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node.
-
-- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-**Capacity** | The maximum storage capacity allocated to CockroachDB. You can configure the maximum storage capacity for a given node using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store).
-**Available** | The free storage capacity available to CockroachDB.
-**Used** | Disk space used by the data in the CockroachDB store. Note that this value is less than (**Capacity** - **Available**) because the **Capacity** and **Available** metrics consider the entire disk and all applications on the disk, including CockroachDB, whereas the **Used** metric tracks only the store's disk usage.
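-
-For example (hypothetical numbers): on a node with a 100 GiB disk where other applications occupy 40 GiB and the CockroachDB store occupies 10 GiB, **Capacity** is 100 GiB, **Available** is 50 GiB, and **Used** is 10 GiB, which is less than **Capacity** - **Available** (50 GiB).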
-
-{{site.data.alerts.callout_info}}
-{% include v2.1/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-overview.md b/src/current/v2.1/admin-ui-overview.md
deleted file mode 100644
index 6ce7852c537..00000000000
--- a/src/current/v2.1/admin-ui-overview.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Admin UI Overview
-summary: Use the Admin UI to monitor and optimize cluster performance.
-toc: false
-key: explore-the-admin-ui.html
----
-
-The CockroachDB Admin UI provides details about your cluster and database configuration, and helps you optimize cluster performance by monitoring the following areas:
-
-Area | Description
---------|----
-[Node Map](enable-node-map.html) | View and monitor the metrics and geographical configuration of your cluster.
-[Cluster Health](admin-ui-access-and-navigate.html#summary-panel) | View essential metrics about the cluster's health, such as the number of live, dead, and suspect nodes, the number of unavailable ranges, and the queries per second and service latency across the cluster.
-[Overview Metrics](admin-ui-overview-dashboard.html) | View important SQL performance, replication, and storage metrics.
-[Hardware Metrics](admin-ui-hardware-dashboard.html) | View metrics about CPU usage, disk throughput, network traffic, storage capacity, and memory.
-[Runtime Metrics](admin-ui-runtime-dashboard.html) | View metrics about node count, CPU time, and memory usage.
-[SQL Performance](admin-ui-sql-dashboard.html) | View metrics about SQL connections, byte traffic, queries, transactions, and service latency.
-[Storage Utilization](admin-ui-storage-dashboard.html) | View metrics about storage capacity and file descriptors.
-[Replication Details](admin-ui-replication-dashboard.html) | View metrics about how data is replicated across the cluster, such as range status, replicas per store, and replica quiescence.
-[Nodes Details](admin-ui-access-and-navigate.html#summary-panel) | View details of live, dead, and decommissioned nodes.
-[Events](admin-ui-access-and-navigate.html#events-panel) | View a list of recent cluster events.
-[Database Details](admin-ui-databases-page.html) | View details about the system and user databases in the cluster.
-[Statements Details](admin-ui-statements-page.html) | Identify frequently executed or high latency [SQL statements](sql-statements.html).
-[Jobs Details](admin-ui-jobs-page.html) | View details of the jobs running in the cluster.
-[Advanced Debugging Pages](admin-ui-debug-pages.html) | View advanced monitoring and troubleshooting reports.
-
-The Admin UI also provides details about the way data is **Distributed**, the state of specific **Queues**, and metrics for **Slow Queries**, but these details are largely internal and intended for use by CockroachDB developers.
-
-{{site.data.alerts.callout_info}}
-By default, the Admin UI shares anonymous usage details with Cockroach Labs. For information about the details shared and how to opt-out of reporting, see [Diagnostics Reporting](diagnostics-reporting.html).
-{{site.data.alerts.end}}
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-replication-dashboard.md b/src/current/v2.1/admin-ui-replication-dashboard.md
deleted file mode 100644
index 4c34d6de26c..00000000000
--- a/src/current/v2.1/admin-ui-replication-dashboard.md
+++ /dev/null
@@ -1,98 +0,0 @@
----
-title: Replication Dashboard
-summary: The Replication dashboard lets you monitor the replication metrics for your cluster.
-toc: true
----
-
-The **Replication** dashboard in the CockroachDB Admin UI enables you to monitor the replication metrics for your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Replication**.
-
-
-## Review of CockroachDB terminology
-
-- **Range:** CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.
-- **Range Replica:** CockroachDB replicates each range (3 times by default) and stores each replica on a different node.
-- **Range Lease:** For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.
-- **Under-replicated Ranges:** When a cluster is first initialized, the few default starting ranges will only have a single replica, but as soon as other nodes are available, they will replicate to them until they've reached their desired replication factor, the default being 3. If a range does not have enough replicas, the range is said to be "under-replicated".
-- **Unavailable Ranges:** If a majority of a range's replicas are on nodes that are unavailable, then the entire range is unavailable and will be unable to process queries.
-
-For more details, see [Scalable SQL Made Easy: How CockroachDB Automates Operations](https://www.cockroachlabs.com/blog/automated-rebalance-and-repair/).
-
-## Replication dashboard
-
-The **Replication** dashboard displays the following time series graphs:
-
-### Ranges
-
-
-
-The **Ranges** graph shows you various details about the status of ranges.
-
-- In the node view, the graph shows details about ranges on the node.
-
-- In the cluster view, the graph shows details about ranges across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-Ranges | The number of ranges.
-Leaders | The number of ranges with leaders. If the number does not match the number of ranges for a long time, troubleshoot your cluster.
-Lease Holders | The number of ranges that have leases.
-Leaders w/o Leases | The number of Raft leaders without leases. If the number is non-zero for a long time, troubleshoot your cluster.
-Unavailable | The number of unavailable ranges. If the number is non-zero for a long time, troubleshoot your cluster.
-Under-replicated | The number of under-replicated ranges.
-
-### Replicas Per Store
-
-
-
-- In the node view, the graph shows the number of range replicas on the store.
-
-- In the cluster view, the graph shows the number of range replicas on each store.
-
-You can [Configure replication zones](configure-replication-zones.html) to set the number and location of replicas. You can monitor the configuration changes using the Admin UI, as described in [Fault tolerance and recovery](demo-fault-tolerance-and-recovery.html).
-
-### Replica Quiescence
-
-
-
-- In the node view, the graph shows the number of replicas on the node.
-
-- In the cluster view, the graph shows the number of replicas across all nodes.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-Replicas | The number of replicas.
-Quiescent | The number of replicas that haven't been accessed for a while.
-
-### Snapshots
-
-
-
-Usually the nodes in a [Raft group](architecture/replication-layer.html#raft) stay synchronized by following along the log message by message. However, if a node is far enough behind the log (e.g., if it was offline or is a new node getting up to speed), rather than send all the individual messages that changed the range, the cluster can send it a snapshot of the range and it can start following along from there. Commonly this is done preemptively, when the cluster can predict that a node will need to catch up, but occasionally the Raft protocol itself will request the snapshot.
-
-Metric | Description
--------|------------
-Generated | The number of snapshots created per second.
-Applied (Raft-initiated) | The number of snapshots applied to nodes per second that were initiated within Raft.
-Applied (Preemptive) | The number of snapshots applied to nodes per second that were anticipated ahead of time (e.g., because a node was about to be added to a Raft group).
-Reserved | The number of slots reserved per second for incoming snapshots that will be sent to a node.
-
-### Other graphs
-
-The **Replication** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Leaseholders per Store
-- Average Queries per Store
-- Logical Bytes per Store
-- Range Operations
-
-For monitoring CockroachDB, it is sufficient to use the [**Ranges**](#ranges), [**Replicas per Store**](#replicas-per-store), and [**Replica Quiescence**](#replica-quiescence) graphs.
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-runtime-dashboard.md b/src/current/v2.1/admin-ui-runtime-dashboard.md
deleted file mode 100644
index 51182c80e43..00000000000
--- a/src/current/v2.1/admin-ui-runtime-dashboard.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: Runtime Dashboard
-toc: true
----
-
-The **Runtime** dashboard in the CockroachDB Admin UI lets you monitor runtime metrics for your cluster, such as node count, memory usage, and CPU time. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Runtime**.
-
-
-The **Runtime** dashboard displays the following time series graphs:
-
-## Live Node Count
-
-
-
-In the node view as well as the cluster view, the graph shows the number of live nodes in the cluster.
-
-A dip in the graph indicates decommissioned nodes, dead nodes, or nodes that are not responding. To troubleshoot the dip in the graph, refer to the [Summary panel](admin-ui-access-and-navigate.html#summary-panel).
-
-## Memory Usage
-
-
-
-- In the node view, the graph shows the memory in use for the selected node.
-
-- In the cluster view, the graph shows the memory in use across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-RSS | Total memory in use by CockroachDB.
-Go Allocated | Memory allocated by the Go layer.
-Go Total | Total memory managed by the Go layer.
-CGo Allocated | Memory allocated by the C layer.
-CGo Total | Total memory managed by the C layer.
-
-{{site.data.alerts.callout_info}}If Go Total or CGo Total fluctuates or grows steadily over time, contact us.{{site.data.alerts.end}}
-
-## CPU Time
-
-
-
-
-- In the node view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations for the selected node.
-- In the cluster view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations across all nodes in the cluster.
-
-On hovering over the CPU Time graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-User CPU Time | Total CPU seconds per second used by the CockroachDB process across all nodes.
-Sys CPU Time | Total CPU seconds per second used for CockroachDB system-level operations across all nodes.
-
-## Clock Offset
-
-
-
-- In the node view, the graph shows the mean clock offset of the node against the rest of the cluster.
-- In the cluster view, the graph shows the mean clock offset of each node against the rest of the cluster.
-
-## Other graphs
-
-The **Runtime** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Goroutine Count
-- GC Runs
-- GC Pause Time
-
-For monitoring CockroachDB, it is sufficient to use the [**Live Node Count**](#live-node-count), [**Memory Usage**](#memory-usage), [**CPU Time**](#cpu-time), and [**Clock Offset**](#clock-offset) graphs.
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-sql-dashboard.md b/src/current/v2.1/admin-ui-sql-dashboard.md
deleted file mode 100644
index 072de16f1c2..00000000000
--- a/src/current/v2.1/admin-ui-sql-dashboard.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: SQL Dashboard
-summary: The SQL dashboard lets you monitor the performance of your SQL queries.
-toc: true
----
-
-The **SQL** dashboard in the CockroachDB Admin UI lets you monitor the performance of your SQL queries. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **SQL**.
-
-
-The **SQL** dashboard displays the following time series graphs:
-
-## SQL Connections
-
-
-
-- In the node view, the graph shows the number of connections currently open between the client and the selected node.
-
-- In the cluster view, the graph shows the total number of SQL client connections to all nodes combined.
-
-## SQL Byte Traffic
-
-
-
-The **SQL Byte Traffic** graph helps you correlate SQL query count to byte traffic, especially in bulk data inserts or analytic queries that return data in bulk.
-
-- In the node view, the graph shows the current byte throughput (bytes/second) between all the currently connected SQL clients and the node.
-
-- In the cluster view, the graph shows the aggregate client throughput across all nodes.
-
-## SQL Queries
-
-
-
-- In the node view, the graph shows the 10-second average of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node.
-
-- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.
-
-## SQL Query Errors
-
-
-
-- In the node view, the graph shows the 10-second average of the number of SQL statements issued to the node that returned a [planning](architecture/sql-layer.html#sql-parser-planner-executor), [runtime](architecture/sql-layer.html#sql-parser-planner-executor), or [retry error](transactions.html#error-handling).
-
-- In the cluster view, the graph shows the 10-second average of the number of SQL statements that returned a [planning](architecture/sql-layer.html#sql-parser-planner-executor), [runtime](architecture/sql-layer.html#sql-parser-planner-executor), or [retry error](transactions.html#error-handling) across all nodes.
-
-## Service Latency: SQL, 99th percentile
-
-
-
-Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client.
-
-- In the node view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the selected node.
-
-- In the cluster view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for each node in the cluster.
-
-## Transactions
-
-
-
-- In the node view, the graph shows the 10-second average of the number of opened, committed, aborted, and rolled back [transactions](transactions.html) per second issued by SQL clients on the node.
-
-- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current [transactions](transactions.html) load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.
-
-If the graph shows excessive aborts or rollbacks, it might indicate issues with the SQL queries. In that case, re-examine queries to lower contention.
-
-## Other graphs
-
-The **SQL** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Execution Latency
-- Active Distributed SQL Queries
-- Active Flows for Distributed SQL Queries
-- Service Latency: DistSQL
-- Schema Changes
-
-For monitoring CockroachDB, it is sufficient to use the [**SQL Connections**](#sql-connections), [**SQL Byte Traffic**](#sql-byte-traffic), [**SQL Queries**](#sql-queries), [**Service Latency**](#service-latency-sql-99th-percentile), and [**Transactions**](#transactions) graphs.
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-statements-page.md b/src/current/v2.1/admin-ui-statements-page.md
deleted file mode 100644
index 88772c4b63e..00000000000
--- a/src/current/v2.1/admin-ui-statements-page.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: Statements Page
-toc: true
----
-
-New in v2.1: The **Statements** page helps you identify frequently executed or high latency [SQL statements](sql-statements.html). The **Statements** page also allows you to view the details of an individual SQL statement by clicking on the statement to view the **Statement Details** page.
-
-To view the **Statements** page, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Statements** on the left.
-
-
-
-## Limitation
-
-The **Statements** page displays the details of the SQL statements executed within a specified time interval. At the end of the interval, the display is wiped clean, and you will not see any statements on the page until the next set of statements is executed. By default, the time interval is set to one hour; however, you can customize the interval using the [`diagnostics.reporting.interval`](cluster-settings.html#settings) cluster setting.
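-
-For example, to double the interval (hypothetical value; assumes you have the privileges needed to change cluster settings):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING diagnostics.reporting.interval = '2h';
-~~~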
-
-## Selecting an application
-
-If you have multiple applications running on the cluster, the **Statements** page shows the statements from all of the applications by default. To view the statements pertaining to a particular application, select that application from the **App** dropdown menu.
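-
-Statements are grouped under the application name that each client session sets. For example, a client could identify itself under a hypothetical name like `bank_app`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET application_name = 'bank_app';
-~~~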
-
-## Understanding the Statements page
-
-### SQL statement fingerprint
-
-The **Statements** page displays the details of SQL statement fingerprints instead of individual SQL statements.
-
-A statement fingerprint is the abstracted form of one or more similar SQL statements, with the literal values replaced by underscores (`_`). Grouping similar SQL statements as fingerprints helps you quickly identify frequently executed SQL statements and their latencies.
-
-A statement fingerprint is generated when two or more statements are the same after any literal values in them (e.g., numbers and strings) are replaced with underscores. For example, the following statements have the same fingerprint once their numbers have been replaced with underscores:
-
-- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (380, 11, 11098)`
-- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (192, 891, 20)`
-- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (784, 452, 78)`
-
-Thus, they can have the same fingerprint:
-
-`INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (_, _, _)`
-
-The following statements are different enough to not have the same fingerprint:
-
-- `INSERT INTO orders(product_id, customer_id, transaction_id) VALUES (380, 11, 11098)`
-- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (380, 11, 11098)`
-- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES ($1, 11, 11098)`
-- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES ($1, $2, 11098)`
-- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES ($1, $2, $3)`
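-
-Fingerprints can also be inspected from SQL by querying the `crdb_internal.node_statement_statistics` table, an internal interface whose schema may change between versions (treat this as a sketch, not a stable API):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT application_name, key, count
-  FROM crdb_internal.node_statement_statistics
-  ORDER BY count DESC;
-~~~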
-
-### Parameters
-
-The **Statements** page displays the time, execution count, number of [retries](transactions.html#transaction-retries), number of rows affected, and latency for each statement fingerprint. By default, the statement fingerprints are sorted by time; however, you can sort the table by execution count, retries, rows affected, and latency.
-
-The following details are provided for each statement fingerprint:
-
-Parameter | Description
------|------------
-Statement | The SQL statement or the fingerprint of similar SQL statements. <br><br> To view additional details of a statement fingerprint, click the statement fingerprint in the **Statement** column to open the [**Statement Details** page](#statement-details-page).
-Time | The cumulative time taken to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-Execution Count | The total number of times the SQL statement (or multiple statements having the same fingerprint) is executed within the last hour or the [specified time interval](#limitation). <br><br> The execution count is displayed as a numerical value as well as a horizontal bar. The bar is color-coded to indicate the ratio of runtime success (blue) to runtime failure (red) of the execution count for the fingerprint. The bar also helps you compare the execution count across all SQL fingerprints in the table. <br><br> You can sort the table by count.
-Retries | The cumulative number of retries to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-Rows Affected | The average number of rows returned while executing the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). <br><br> The number of rows returned is represented in two ways: the numerical value shows the mean number of rows returned, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value of the number of rows returned). The bar helps you compare the mean rows across all SQL fingerprints in the table. <br><br> You can sort the table by rows returned.
-Latency | The average service latency of the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). <br><br> The latency is represented in two ways: the numerical value shows the mean latency, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value of latency). The bar also helps you compare the mean latencies across all SQL fingerprints in the table. <br><br> You can sort the table by latency.
-
-## Statement Details page
-
-The **Statement Details** page displays the details of the time, execution count, retries, rows returned, and latency by phase and by gateway node for the selected statement fingerprint.
-
-
-
-### Latency by Phase
-
-The **Latency by Phase** table provides the mean value and one standard deviation of the mean value of the overall service latency as well as latency for each execution phase (parse, plan, run) for the SQL statement (or multiple statements having the same fingerprint). The table provides the service latency details in numerical values as well as color-coded bar graphs: blue indicates the mean value and yellow indicates one standard deviation of the mean value of latency.
-
-### Statistics by Gateway Node
-
-The **Statistics by Gateway Node** table provides a breakdown of the number of statements of the selected fingerprint per gateway node. For each gateway node, the table also provides the following details:
-
-Parameter | Description
------|------------
-Node | The ID of the gateway node.
-Time | The cumulative time taken to execute the statement within the last hour or the [specified time interval](#limitation).
-Execution Count | The total number of times the SQL statement (or multiple statements having the same fingerprint) is executed.
-Retries | The cumulative number of retries to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-Rows Affected | The average number of rows returned while executing the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). <br><br> The number of rows returned is represented in two ways: the numerical value shows the mean number of rows returned, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value of the number of rows returned). The bar helps you compare the mean rows across all SQL fingerprints in the table. <br><br> You can sort the table by rows returned.
-Latency | The average service latency of the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). <br><br> The latency is represented in two ways: the numerical value shows the mean latency, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value). The bar also helps you compare the mean latencies across all SQL fingerprints in the table. <br><br> You can sort the table by latency.
-
-### Execution Count
-
-The **Execution Count** table provides information about the following parameters in numerical values as well as bar graphs:
-
-Parameter | Description
------|------------
-First Attempts | The cumulative number of first attempts to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-Retries | The cumulative number of retries to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-Max Retries | The highest number of retries for a single SQL statement with this fingerprint within the last hour or the [specified time interval](#limitation). <br><br> For example, if three statements having the same fingerprint had to be retried 0, 1, and 5 times, then the Max Retries value for the fingerprint is 5.
-Total | The total number of executions of statements with this fingerprint. It is calculated as the sum of first attempts and cumulative retries.
-
-### Row Count
-
-The **Row Count** table provides the mean value and one standard deviation of the mean value of the cumulative count of rows returned by the SQL statement (or multiple statements having the same fingerprint). The table provides the row count details in numerical values as well as a bar graph.
-
-### Statistics
-
-The statistics box on the right-hand side of the **Statements Details** page provides the following details for the statement fingerprint:
-
-Parameter | Description
------|------------
-Total time | The cumulative time taken to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-Execution count | The total number of times the SQL statement (or multiple statements having the same fingerprint) is executed within the last hour or the [specified time interval](#limitation).
-Executed without retry | The percentage of successful executions of the SQL statement (or multiple statements having the same fingerprint) on the first attempt within the last hour or the [specified time interval](#limitation).
-Mean service latency | The average service latency of the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-Mean number of rows | The average number of rows returned while executing the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).
-
-The table below the statistics box provides the following details:
-
-Parameter | Description
------|------------
-App | Name of the application specified by the [`application_name`](show-vars.html#supported-variables) session setting. The **Statements Details** page shows the details for this application.
-Distributed execution? | Indicates whether the statement execution was distributed.
-Used cost-based optimizer? | Indicates whether the statement (or multiple statements having the same fingerprint) was executed using the [cost-based optimizer](cost-based-optimizer.html).
-Failed? | Indicates whether the statement (or multiple statements having the same fingerprint) failed to execute.
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/admin-ui-storage-dashboard.md b/src/current/v2.1/admin-ui-storage-dashboard.md
deleted file mode 100644
index 300d30317ab..00000000000
--- a/src/current/v2.1/admin-ui-storage-dashboard.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: Storage Dashboard
-summary: The Storage dashboard lets you monitor the storage utilization for your cluster.
-toc: true
----
-
-The **Storage** dashboard in the CockroachDB Admin UI lets you monitor the storage utilization for your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Storage**.
-
-
-The **Storage** dashboard displays the following time series graphs:
-
-## Capacity
-
-
-
-You can monitor the **Capacity** graph to determine when additional storage is needed.
-
-- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node.
-
-- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster.
-
-On hovering over the graph, the values for the following metrics are displayed:
-
-Metric | Description
---------|----
-**Capacity** | The maximum storage capacity allocated to CockroachDB. You can configure the maximum storage capacity for a given node using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store).
-**Available** | The free storage capacity available to CockroachDB.
-**Used** | Disk space used by the data in the CockroachDB store. Note that this value is less than (**Capacity** - **Available**) because the **Capacity** and **Available** metrics consider the entire disk and all applications on the disk, including CockroachDB, whereas the **Used** metric tracks only the store's disk usage.
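-
-For example (hypothetical numbers): with a 100 GiB disk, 40 GiB used by other applications, and a 10 GiB store, **Used** is 10 GiB while **Capacity** - **Available** is 50 GiB.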
-
-{{site.data.alerts.callout_info}}
-{% include v2.1/misc/available-capacity-metric.md %}
-{{site.data.alerts.end}}
-
-## File Descriptors
-
-
-
-- In the node view, the graph shows the number of open file descriptors for that node, compared with the file descriptor limit.
-
-- In the cluster view, the graph shows the number of open file descriptors across all nodes, compared with the file descriptor limit.
-
-If the Open count is almost equal to the Limit count, increase [File Descriptors](recommended-production-settings.html#file-descriptors-limit).
-
-{{site.data.alerts.callout_info}}
-If you are running multiple nodes on a single machine (not recommended), every file descriptor open on the machine is counted as open on each node. Thus, the Open count displayed in the Admin UI is the actual number of open file descriptors multiplied by the number of nodes, compared with the file descriptor limit. For example, with 3 nodes on one machine and 1,000 file descriptors open on the machine, the graph shows an Open count of 3,000.
-{{site.data.alerts.end}}
-
-For Windows systems, you can ignore the File Descriptors graph because the concept of file descriptors is not applicable to Windows.
-
-## Other graphs
-
-The **Storage** dashboard shows other time series graphs that are important for CockroachDB developers:
-
-- Live Bytes
-- Log Commit Latency
-- Command Commit Latency
-- RocksDB Read Amplification
-- RocksDB SSTables
-- Time Series Writes
-- Time Series Bytes Written
-
-For monitoring CockroachDB, it is sufficient to use the [**Capacity**](#capacity) and [**File Descriptors**](#file-descriptors) graphs.
-
-## See also
-
-- [Troubleshooting Overview](troubleshooting-overview.html)
-- [Support Resources](support-resources.html)
-- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints)
diff --git a/src/current/v2.1/alter-column.md b/src/current/v2.1/alter-column.md
deleted file mode 100644
index 6fb1bea5eb0..00000000000
--- a/src/current/v2.1/alter-column.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: ALTER COLUMN
-summary: Use the ALTER COLUMN statement to set, change, or drop a column's DEFAULT constraint or to drop the NOT NULL constraint.
-toc: true
----
-
-The `ALTER COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and sets, changes, or drops a column's [`DEFAULT` constraint](default-value.html) or drops the [`NOT NULL` constraint](not-null.html).
-
-{{site.data.alerts.callout_info}}
-To manage other constraints, see [`ADD CONSTRAINT`](add-constraint.html) and [`DROP CONSTRAINT`](drop-constraint.html).
-{{site.data.alerts.end}}
-
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/alter_column.html %}
-
-
-## Required privileges
-
-The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table.
-
-## Parameters
-
-| Parameter | Description |
-|-----------|-------------|
-| `table_name` | The name of the table with the column you want to modify. |
-| `column_name` | The name of the column you want to modify. |
-| `a_expr` | The new [Default Value](default-value.html) you want to use. |
-
-## Viewing schema changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Examples
-
-### Set or change a `DEFAULT` value
-
-Setting the [`DEFAULT` value constraint](default-value.html) inserts the value into the column when data is written to the table without explicitly defining a value for that column. If the column already has a `DEFAULT` value set, you can use this statement to change it.
-
-The example below inserts the Boolean value `true` whenever you insert data into the `subscriptions` table without defining a value for the `newsletter` column.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE subscriptions ALTER COLUMN newsletter SET DEFAULT true;
-~~~
-
-### Remove `DEFAULT` constraint
-
-If the column has a defined [`DEFAULT` value](default-value.html), you can remove the constraint, which means the column will no longer insert a value by default if one is not explicitly defined for the column.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP DEFAULT;
-~~~
-
-### Remove `NOT NULL` constraint
-
-If the column has the [`NOT NULL` constraint](not-null.html) applied to it, you can remove the constraint, which means the column becomes optional and can have *NULL* values written into it.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP NOT NULL;
-~~~
-
-### Convert a computed column into a regular column
-
-New in v2.1: {% include {{ page.version.version }}/computed-columns/convert-computed-column.md %}
-
-## See also
-
-- [Constraints](constraints.html)
-- [`ADD CONSTRAINT`](add-constraint.html)
-- [`DROP CONSTRAINT`](drop-constraint.html)
-- [`ALTER TABLE`](alter-table.html)
diff --git a/src/current/v2.1/alter-database.md b/src/current/v2.1/alter-database.md
deleted file mode 100644
index b8a57fe5093..00000000000
--- a/src/current/v2.1/alter-database.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: ALTER DATABASE
-summary: Use the ALTER DATABASE statement to change an existing database.
-toc: false
----
-
-The `ALTER DATABASE` [statement](sql-statements.html) applies a schema change to a database.
-
-{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
-
-For information on using `ALTER DATABASE`, see the documents for its relevant subcommands.
-
-Subcommand | Description
------------|------------
-[`CONFIGURE ZONE`](configure-zone.html) | New in v2.1: [Configure replication zones](configure-replication-zones.html) for a database.
-[`RENAME`](rename-database.html) | Change the name of a database.
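-
-For example, assuming a database named `db1` exists, you could rename it or adjust its replication factor as follows (hypothetical names and values):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER DATABASE db1 RENAME TO db2;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER DATABASE db2 CONFIGURE ZONE USING num_replicas = 5;
-~~~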
diff --git a/src/current/v2.1/alter-index.md b/src/current/v2.1/alter-index.md
deleted file mode 100644
index 61067b96a49..00000000000
--- a/src/current/v2.1/alter-index.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: ALTER INDEX
-summary: Use the ALTER INDEX statement to change an existing index.
-toc: false
----
-
-The `ALTER INDEX` [statement](sql-statements.html) applies a schema change to an index.
-
-{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
-
-For information on using `ALTER INDEX`, see the documents for its relevant subcommands.
-
-Subcommand | Description
------------|------------
-[`CONFIGURE ZONE`](configure-zone.html) | New in v2.1: [Configure replication zones](configure-replication-zones.html) for an index.
-[`RENAME`](rename-index.html) | Change the name of an index.
-[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the index.
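-
-For example, assuming a table `accounts` with an index named `idx_balance`, a rename would look like this (hypothetical names):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER INDEX accounts@idx_balance RENAME TO idx_acct_balance;
-~~~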
diff --git a/src/current/v2.1/alter-range.md b/src/current/v2.1/alter-range.md
deleted file mode 100644
index d39f52ca98d..00000000000
--- a/src/current/v2.1/alter-range.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: ALTER RANGE
-summary: Use the ALTER RANGE statement to change an existing system range.
-toc: false
----
-
-New in v2.1: The `ALTER RANGE` [statement](sql-statements.html) applies a schema change to a system range.
-
-{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
-
-For information on using `ALTER RANGE`, see the documents for its relevant subcommands.
-
-Subcommand | Description
------------|------------
-[`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for a system range.
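-
-For example, to change the replication factor for the `default` system range (the value is illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
-~~~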
diff --git a/src/current/v2.1/alter-sequence.md b/src/current/v2.1/alter-sequence.md
deleted file mode 100644
index 68014a5f205..00000000000
--- a/src/current/v2.1/alter-sequence.md
+++ /dev/null
@@ -1,117 +0,0 @@
----
-title: ALTER SEQUENCE
-summary: Use the ALTER SEQUENCE statement to change the name, increment values, and other settings of a sequence.
-toc: true
----
-
-The `ALTER SEQUENCE` [statement](sql-statements.html) [changes the name](rename-sequence.html), increment values, and other settings of a sequence.
-
-{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
-
-## Required privileges
-
-The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the parent database.
-
-## Synopsis
-
-{% include {{ page.version.version }}/sql/diagrams/alter_sequence_options.html %}
-
-## Parameters
-
-
-
- Parameter | Description
------------|------------
-`IF EXISTS` | Modify the sequence only if it exists; if it does not exist, do not return an error.
-`sequence_name` | The name of the sequence you want to modify.
-`INCREMENT` | The new value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence.
-`MINVALUE` | The new minimum value of the sequence. <br><br> Default: `1`
-`MAXVALUE` | The new maximum value of the sequence. <br><br> Default: `9223372036854775807`
-`START` | The value the sequence starts at if you `RESTART` or if the sequence hits the `MAXVALUE` and `CYCLE` is set. <br><br> `RESTART` and `CYCLE` are not implemented yet.
-`CYCLE` | The sequence will wrap around when the sequence value hits the maximum or minimum value. If `NO CYCLE` is set, the sequence will not wrap.
-
-## Examples
-
-### Change the increment value of a sequence
-
-In this example, we're going to change the increment value of a sequence from its current state (i.e., `1`) to `2`.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER SEQUENCE customer_seq INCREMENT 2;
-~~~
-
-Next, we'll add another record to the table and check that the new record adheres to the new sequence.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO customer_list (customer, address) VALUES ('Marie', '333 Ocean Ave');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM customer_list;
-~~~
-~~~
-+----+----------+--------------------+
-| id | customer | address |
-+----+----------+--------------------+
-| 1 | Lauren | 123 Main Street |
-| 2 | Jesse | 456 Broad Ave |
-| 3 | Amruta | 9876 Green Parkway |
-| 5 | Marie | 333 Ocean Ave |
-+----+----------+--------------------+
-~~~
-
-### Set the next value of a sequence
-
-In this example, we're going to change the next value of the example sequence (`customer_seq`). Currently, the next value will be `7` (i.e., `5` + `INCREMENT 2`). We will change the next value to `20`.
-
-{{site.data.alerts.callout_info}}You cannot set a value outside the MAXVALUE or MINVALUE of the sequence. {{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT setval('customer_seq', 20, false);
-~~~
-~~~
-+--------+
-| setval |
-+--------+
-| 20 |
-+--------+
-~~~
-
-{{site.data.alerts.callout_info}}
-The `setval('seq_name', value, is_called)` function in CockroachDB SQL mimics the `setval()` function in PostgreSQL, but it does not store the `is_called` flag. Instead, it sets the value to `val - increment` for `false` or `val` for `true`.
-{{site.data.alerts.end}}
-
-Let's add another record to the table to check that the new record adheres to the new next value.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO customer_list (customer, address) VALUES ('Lola', '333 Schermerhorn');
-~~~
-~~~
-+----+----------+--------------------+
-| id | customer | address |
-+----+----------+--------------------+
-| 1 | Lauren | 123 Main Street |
-| 2 | Jesse | 456 Broad Ave |
-| 3 | Amruta | 9876 Green Parkway |
-| 5 | Marie | 333 Ocean Ave |
-| 20 | Lola | 333 Schermerhorn |
-+----+----------+--------------------+
-~~~
-
-## See also
-
-- [`RENAME SEQUENCE`](rename-sequence.html)
-- [`CREATE SEQUENCE`](create-sequence.html)
-- [`DROP SEQUENCE`](drop-sequence.html)
-- [Functions and Operators](functions-and-operators.html)
-- [Other SQL Statements](sql-statements.html)
-- [Online Schema Changes](online-schema-changes.html)
diff --git a/src/current/v2.1/alter-table.md b/src/current/v2.1/alter-table.md
deleted file mode 100644
index 19a209038cb..00000000000
--- a/src/current/v2.1/alter-table.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: ALTER TABLE
-summary: Use the ALTER TABLE statement to change the schema of a table.
-toc: true
----
-
-The `ALTER TABLE` [statement](sql-statements.html) applies a schema change to a table.
-
-{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
-
-## Subcommands
-
-For information on using `ALTER TABLE`, see the documents for its relevant subcommands.
-
-Subcommand | Description
------------|------------
-[`ADD COLUMN`](add-column.html) | Add columns to tables.
-[`ADD CONSTRAINT`](add-constraint.html) | Add constraints to columns.
-[`ALTER COLUMN`](alter-column.html) | Change or drop a column's [`DEFAULT` constraint](default-value.html) or drop the [`NOT NULL` constraint](not-null.html).
-[`ALTER TYPE`](alter-type.html) | New in v2.1: Change a column's [data type](data-types.html).
-[`CONFIGURE ZONE`](configure-zone.html) | New in v2.1: [Configure replication zones](configure-replication-zones.html) for a table.
-[`DROP COLUMN`](drop-column.html) | Remove columns from tables.
-[`DROP CONSTRAINT`](drop-constraint.html) | Remove constraints from columns.
-[`EXPERIMENTAL_AUDIT`](experimental-audit.html) | Enable per-table audit logs.
-[`PARTITION BY`](partition-by.html) | Repartition or unpartition a table with partitions ([Enterprise-only](enterprise-licensing.html)).
-[`RENAME COLUMN`](rename-column.html) | Change the names of columns.
-[`RENAME TABLE`](rename-table.html) | Change the names of tables.
-[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the table.
-[`VALIDATE CONSTRAINT`](validate-constraint.html) | Check whether values in a column match a [constraint](constraints.html) on the column.
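-
-For example, a subcommand such as [`ADD COLUMN`](add-column.html) is applied directly after the table name (hypothetical table and column names):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE users ADD COLUMN middle_name STRING;
-~~~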
-
-## Viewing schema changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
diff --git a/src/current/v2.1/alter-type.md b/src/current/v2.1/alter-type.md
deleted file mode 100644
index c6aa5ed56ee..00000000000
--- a/src/current/v2.1/alter-type.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: ALTER TYPE
-summary: Use the ALTER TYPE statement to change a column's data type.
-toc: true
----
-
-New in v2.1: The `ALTER TYPE` [statement](sql-statements.html) is part of [`ALTER TABLE`](alter-table.html) and changes a column's [data type](data-types.html).
-
-## Considerations
-
-You can use the `ALTER TYPE` subcommand if the following conditions are met:
-
-- The on-disk representation of the column remains unchanged. For example, you cannot change the column data type from `STRING` to `INT`, even if the string is just a number.
-- The existing data remains valid. For example, you can change the column data type from `STRING[10]` to `STRING[20]`, but not to `STRING[5]`, since that would invalidate the existing data.
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/alter_type.html %}
-
-
-## Required privileges
-
-The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table.
-
-## Parameters
-
-| Parameter | Description
-|-----------|-------------
-| `table_name` | The name of the table with the column whose data type you want to change.
-| `column_name` | The name of the column whose data type you want to change.
-| `typename` | The new [data type](data-types.html) you want to use.
-
-## Examples
-
-### Success scenario
-
-The [TPC-C](performance-benchmarking-with-tpc-c.html) database has a `customer` table with a column `c_credit_lim DECIMAL (10,2)`. Suppose you want to change the data type to `DECIMAL (12,2)`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE customer ALTER c_credit_lim TYPE DECIMAL (12,2);
-~~~
-
-~~~
-ALTER TABLE
-
-Time: 80.814044ms
-~~~
-
-### Error scenarios
-
-Changing a column data type from `DECIMAL` to `INT` would change the on-disk representation of the column. Therefore, attempting to do so results in an error:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE customer ALTER c_credit_lim TYPE INT;
-~~~
-
-~~~
-pq: type conversion not yet implemented
-~~~
-
-Changing a column data type from `DECIMAL(12,2)` to `DECIMAL (8,2)` would invalidate the existing data. Therefore, attempting to do so results in an error:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE customer ALTER c_credit_lim TYPE DECIMAL (8,2);
-~~~
-
-~~~
-pq: type conversion not yet implemented
-~~~
-
-## See also
-
-- [`ALTER TABLE`](alter-table.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.1/alter-user.md b/src/current/v2.1/alter-user.md
deleted file mode 100644
index db54cbb9b7f..00000000000
--- a/src/current/v2.1/alter-user.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-title: ALTER USER
-summary: The ALTER USER statement can be used to add or change a user's password.
-toc: true
----
-
-The `ALTER USER` [statement](sql-statements.html) can be used to add or change a [user's](create-and-manage-users.html) password.
-
-{{site.data.alerts.callout_success}}
-You can also use the [`cockroach user`](create-and-manage-users.html#update-a-users-password) command to add or change a user's password.
-{{site.data.alerts.end}}
-
-
-## Considerations
-
-- Password creation and alteration is supported only in secure clusters for non-`root` users.
-
-## Required privileges
-
-The user must have the `INSERT` and `UPDATE` [privileges](authorization.html#assign-privileges) on the `system.users` table.
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/alter_user_password.html %}
-
-## Parameters
-
-
-
-Parameter | Description
-----------|-------------
-`name` | The name of the user whose password you want to create or change.
-`password` | Let the user [authenticate their access to a secure cluster](authentication.html#client-authentication) using this new password. Passwords should be entered as [string literal](sql-constants.html#string-literals). For compatibility with PostgreSQL, a password can also be entered as an [identifier](#change-password-using-an-identifier), although this is discouraged.
-
-## Examples
-
-### Change password using a string literal
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD 'ilov3beefjerky';
-~~~
-~~~
-ALTER USER 1
-~~~
-
-### Change password using an identifier
-
-The following statement changes the password to `ilov3beefjerky`, as above:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD ilov3beefjerky;
-~~~
-
-This is equivalent to the example in the previous section because the password contains only lowercase characters.
-
-In contrast, the following statement changes the password to `thereisnotomorrow`, even though the password in the syntax contains capitals, because unquoted identifiers are automatically normalized to lowercase:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD ThereIsNoTomorrow;
-~~~
-
-To preserve case in a password specified using identifier syntax, use double quotes:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER USER carl WITH PASSWORD "ThereIsNoTomorrow";
-~~~
-
-## See also
-
-- [`cockroach user` command](create-and-manage-users.html)
-- [`DROP USER`](drop-user.html)
-- [`SHOW USERS`](show-users.html)
-- [`GRANT`](grant.html)
-- [`SHOW GRANTS`](show-grants.html)
-- [Create Security Certificates](create-security-certificates.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v2.1/alter-view.md b/src/current/v2.1/alter-view.md
deleted file mode 100644
index 7a1cc3a6a40..00000000000
--- a/src/current/v2.1/alter-view.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-title: ALTER VIEW
-summary: The ALTER VIEW statement changes the name of a view.
-toc: true
----
-
-The `ALTER VIEW` [statement](sql-statements.html) changes the name of a [view](views.html).
-
-{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
-
-{{site.data.alerts.callout_info}}
-It is not currently possible to change the `SELECT` statement executed by a view. Instead, you must drop the existing view and create a new view. Also, it is not currently possible to rename a view that other views depend on, but this ability may be added in the future (see [this issue](https://github.com/cockroachdb/cockroach/issues/10083)).
-{{site.data.alerts.end}}
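-
-Because the `SELECT` statement cannot be changed in place, the workaround is to drop and recreate the view. The following is a minimal sketch assuming a hypothetical view `bank.user_emails` defined over a hypothetical `bank.accounts` table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DROP VIEW bank.user_emails;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE VIEW bank.user_emails AS SELECT id, email FROM bank.accounts;
-~~~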
-
-## Required privileges
-
-The user must have the `DROP` [privilege](authorization.html#assign-privileges) on the view and the `CREATE` privilege on the parent database.
-
-## Synopsis
-
-
-{% include {{ page.version.version }}/sql/diagrams/alter_view.html %}
-
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`IF EXISTS` | Rename the view only if a view named `view_name` exists; if one does not exist, do not return an error.
-`view_name` | The name of the view to rename. To find view names, use: `SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';`
-`name` | The new [`name`](sql-grammar.html#name) for the view, which must be unique to its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers).
-
-## Example
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';
-~~~
-
-~~~
-+---------------+-------------------+--------------------+------------+---------+
-| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION |
-+---------------+-------------------+--------------------+------------+---------+
-| def | bank | user_accounts | VIEW | 2 |
-| def | bank | user_emails | VIEW | 1 |
-+---------------+-------------------+--------------------+------------+---------+
-(2 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER VIEW bank.user_emails RENAME TO bank.user_email_addresses;
-~~~
-
-~~~
-RENAME VIEW
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';
-~~~
-
-~~~
-+---------------+-------------------+----------------------+------------+---------+
-| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION |
-+---------------+-------------------+----------------------+------------+---------+
-| def | bank | user_accounts | VIEW | 2 |
-| def | bank | user_email_addresses | VIEW | 3 |
-+---------------+-------------------+----------------------+------------+---------+
-(2 rows)
-~~~
-
-## See also
-
-- [Views](views.html)
-- [`CREATE VIEW`](create-view.html)
-- [`SHOW CREATE`](show-create.html)
-- [`DROP VIEW`](drop-view.html)
-- [Online Schema Changes](online-schema-changes.html)
diff --git a/src/current/v2.1/architecture/distribution-layer.md b/src/current/v2.1/architecture/distribution-layer.md
deleted file mode 100644
index 5eb2c10d267..00000000000
--- a/src/current/v2.1/architecture/distribution-layer.md
+++ /dev/null
@@ -1,186 +0,0 @@
----
-title: Distribution Layer
-summary: The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data.
-toc: true
----
-
-The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data.
-
-{{site.data.alerts.callout_info}}
-If you haven't already, we recommend reading the [Architecture Overview](overview.html).
-{{site.data.alerts.end}}
-
-## Overview
-
-To make all data in your cluster accessible from any node, CockroachDB stores data in a monolithic sorted map of key-value pairs. This key-space describes all of the data in your cluster, as well as its location, and is divided into what we call "ranges", contiguous chunks of the key-space, so that every key can always be found in a single range.
-
-CockroachDB implements a sorted map to enable:
-
- - **Simple lookups**: Because we identify which nodes are responsible for certain portions of the data, queries are able to quickly locate where to find the data they want.
- - **Efficient scans**: By defining the order of data, it's easy to find data within a particular range during a scan.
-
-### Monolithic sorted map structure
-
-The monolithic sorted map is composed of two fundamental elements:
-
-- System data, which include **meta ranges** that describe the locations of data in your cluster (among many other cluster-wide and local data elements)
-- User data, which store your cluster's **table data**
-
-#### Meta ranges
-
-The locations of all ranges in your cluster are stored in a two-level index at the beginning of your key-space, known as meta ranges, where the first level (`meta1`) addresses the second, and the second (`meta2`) addresses data in the cluster. Importantly, every node has information on where to locate the `meta1` range (known as its range descriptor, detailed below), and the range is never split.
-
-This meta range structure lets us address up to 4 EiB of user data by default: we can address 2^(18 + 18) = 2^36 ranges; each range addresses 2^26 B, and altogether we address 2^(36+26) B = 2^62 B = 4 EiB. However, with larger range sizes, it's possible to expand this capacity even further.
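-
-Spelled out, the capacity arithmetic works as follows:
-
-~~~
-2^18 meta1 entries × 2^18 meta2 entries   = 2^36 addressable ranges
-2^36 ranges × 2^26 B (64 MiB) per range   = 2^62 B = 4 EiB
-~~~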
-
-Meta ranges are treated mostly like normal ranges and are accessed and replicated just like other elements of your cluster's KV data.
-
-Each node caches values of the `meta2` range it has accessed before, which optimizes access of that data in the future. Whenever a node discovers that its `meta2` cache is invalid for a specific key, the cache is updated by performing a regular read on the `meta2` range.
-
-#### Table data
-
-After the meta ranges comes the KV data your cluster stores.
-
-Each table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 64 MiB in size, it splits into two ranges. This process continues as a table and its indexes continue growing. Once a table is split across multiple ranges, it's likely that the table and secondary indexes will be stored in separate ranges. However, a range can still contain data for both the table and a secondary index.
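-
-As a hypothetical illustration, a table whose keys span `[A-Z)` might split like this once it outgrows a single range:
-
-~~~
-# Before the split, one range holds all of the table's keys:
-[A-Z) -> range 1
-
-# After the range reaches 64 MiB, it splits in two:
-[A-M) -> range 1
-[M-Z) -> range 2
-~~~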
-
-The default 64 MiB range size represents a sweet spot between a size small enough to move quickly between nodes and one large enough to store a meaningfully contiguous set of data whose keys are more likely to be accessed together. These ranges are then shuffled around your cluster to ensure survivability.
-
-These table ranges are replicated (in the aptly named replication layer), and have the addresses of each replica stored in the `meta2` range.
-
-### Using the monolithic sorted map
-
-When a node receives a request, it looks at the meta ranges to find out which node it needs to route the request to by comparing the keys in the request to the keys in its `meta2` range.
-
-These meta ranges are heavily cached, so this is normally handled without having to send an RPC to the node actually containing the `meta2` ranges.
-
-The node then sends those KV operations to the leaseholder identified in the `meta2` range. However, it's possible that the data has moved, in which case the node that no longer has the data replies to the requesting node with its new location. In that case, the requesting node goes back to the `meta2` range to get more up-to-date information and tries again.
-
-### Interactions with other layers
-
-In relationship to other layers in CockroachDB, the distribution layer:
-
-- Receives requests from the transaction layer on the same node.
-- Identifies which nodes should receive the request, and then sends the request to the proper node's replication layer.
-
-## Technical details and components
-
-### gRPC
-
-gRPC is the software nodes use to communicate with one another. Because the distribution layer is the first layer to communicate with other nodes, CockroachDB implements gRPC here.
-
-gRPC requires inputs and outputs to be formatted as protocol buffers (protobufs). To leverage gRPC, CockroachDB implements a protocol-buffer-based API defined in `api.proto`.
-
-For more information about gRPC, see the [official gRPC documentation](http://www.grpc.io/docs/guides/).
-
-### BatchRequest
-
-All KV operation requests are bundled into a [protobuf](https://en.wikipedia.org/wiki/Protocol_Buffers), known as a `BatchRequest`. The destination of this batch is identified in the `BatchRequest` header, as well as a pointer to the request's transaction record. (On the other side, when a node is replying to a `BatchRequest`, it uses a protobuf––`BatchResponse`.)
-
-This `BatchRequest` is also what's used to send requests between nodes using gRPC, which accepts and sends protocol buffers.
-
-### DistSender
-
-The gateway/coordinating node's `DistSender` receives `BatchRequest`s from its own `TxnCoordSender`. `DistSender` is then responsible for breaking up `BatchRequest`s and routing a new set of `BatchRequest`s to the nodes that its cached `meta2` ranges identify as containing the data. It will use the cache to send the request to the leaseholder, but it's also prepared to try the other replicas, in order of "proximity." The replica that the cache says is the leaseholder is simply moved to the front of the list of replicas to be tried, and then an RPC is sent to each of them, in order.
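-
-As a hypothetical illustration, if the cache names `node2` as the leaseholder for a range whose replicas live on nodes 1, 2, and 3, the resulting try order is:
-
-~~~
-[node2 (cached leaseholder), node1, node3]
-~~~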
-
-Requests received by a non-leaseholder fail with an error pointing at the replica's last known leaseholder. These requests are retried transparently with the updated lease by the gateway node and never reach the client.
-
-As nodes begin replying to these commands, `DistSender` also aggregates the results in preparation for returning them to the client.
-
-### Meta range KV structure
-
-Like all other data in your cluster, meta ranges are structured as KV pairs. Both meta ranges have a similar structure:
-
-~~~
-metaX/successorKey -> LeaseholderAddress, [list of other nodes containing data]
-~~~
-
-Element | Description
---------|------------------------
-`metaX` | The level of meta range. Here we use a simplified `meta1` or `meta2`, but these are actually represented in `cockroach` as `\x02` and `\x03` respectively.
-`successorKey` | The first key *greater* than the key you're scanning for. This makes CockroachDB's scans efficient; it simply scans the keys until it finds a value greater than the key it's looking for, and that is where it finds the relevant data. The `successorKey` for the end of a keyspace is identified as `maxKey`.
-`LeaseholderAddress` | The replica primarily responsible for reads and writes, known as the leaseholder. The replication layer contains more information about [leases](replication-layer.html#leases).
-
-Here's an example:
-
-~~~
-meta2/M -> node1:26257, node2:26257, node3:26257
-~~~
-
-In this case, the replica on `node1` is the leaseholder, and nodes 2 and 3 also contain replicas.
-
-#### Example
-
-Let's imagine we have an alphabetically sorted column, which we use for lookups. Here's approximately what the meta ranges would look like:
-
-1. `meta1` contains the addresses of the nodes containing the `meta2` replicas.
-
- ~~~
- # Points to meta2 range for keys [A-M)
- meta1/M -> node1:26257, node2:26257, node3:26257
-
- # Points to meta2 range for keys [M-Z]
- meta1/maxKey -> node4:26257, node5:26257, node6:26257
- ~~~
-
-2. `meta2` contains addresses for the nodes containing the replicas of each range in the cluster, the first of which is the [leaseholder](replication-layer.html#leases).
-
- ~~~
- # Contains [A-G)
- meta2/G -> node1:26257, node2:26257, node3:26257
-
- # Contains [G-M)
- meta2/M -> node1:26257, node2:26257, node3:26257
-
- # Contains [M-Z)
- meta2/Z -> node4:26257, node5:26257, node6:26257
-
- # Contains [Z-maxKey)
- meta2/maxKey -> node4:26257, node5:26257, node6:26257
- ~~~
-
-### Table data KV structure
-
-Key-value data, which represents the data in your tables, is stored using the following structure:
-
-~~~
-/<table_id>/<index_id>/<indexed column values> -> <non-indexed/STORING column values>
-~~~
-
-The table itself is stored with an `index_id` of 1 for its `PRIMARY KEY` columns, with the rest of the columns in the table considered as stored/covered columns.
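-
-As a hypothetical illustration, for a table with ID `51`, a `PRIMARY KEY` column `id`, and stored columns `name` and `balance`, the row `(42, 'Alice', 100)` would map to:
-
-~~~
-/51/1/42 -> ('Alice', 100)
-~~~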
-
-### Range descriptors
-
-Each range in CockroachDB contains metadata, known as a range descriptor. A range descriptor is composed of the following:
-
-- A sequential RangeID
-- The keyspace (i.e., the set of keys) the range contains; for example, the first and last `