42 changes: 42 additions & 0 deletions getting-started/assets/cloud_providers/await-s3.sh
@@ -0,0 +1,42 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

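# Waits for an S3-compatible endpoint to become available by repeatedly
# attempting to list buckets.
#
# Usage: await-s3.sh <endpoint> [key-id] [secret] [sleep-seconds]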
ENDPOINT=$1
# "invalidKey" in combination with SigV4 means "public" access
KEY_ID=${2:-"invalidKey"}
SECRET=${3:-"secret"}
SLEEP=${4:-"1"}

if [ -z "$ENDPOINT" ]; then
  echo "Endpoint must be provided"
  exit 1
fi

# Make up to 30 attempts to list buckets. Success means the service is available
for i in $(seq 1 30); do
  echo "Listing buckets at $ENDPOINT"
  # curl exits with 0 when the endpoint answers the list-buckets request
  if curl --user "$KEY_ID:$SECRET" --aws-sigv4 "aws:amz:us-west-1:s3" "$ENDPOINT"; then
    echo
    echo "$ENDPOINT is available"
    break
  fi
  echo "Sleeping $SLEEP ..."
  sleep "$SLEEP"
done
2 changes: 1 addition & 1 deletion getting-started/minio/README.md
@@ -60,7 +60,7 @@ bin/spark-sql \
--conf spark.sql.catalog.polaris.client.region=irrelevant
```

-Note: `s3cr3t` is defined as the password for the `root` users in the `docker-compose.yml` file.
+Note: `s3cr3t` is defined as the password for the `root` user in the `docker-compose.yml` file.

Note: The `client.region` configuration is required for the AWS S3 client to work, but it is not used in this example
since MinIO does not require a specific region.
103 changes: 103 additions & 0 deletions getting-started/ozone/README.md
@@ -0,0 +1,103 @@
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

# Getting Started with Apache Polaris and Apache Ozone

## Overview

This example uses [Apache Ozone](https://ozone.apache.org/) as a storage provider with Polaris.

Spark is used as a query engine. This example assumes a local Spark installation.
See the [Spark Notebooks Example](../spark/README.md) for a more advanced Spark setup.

## Starting the Example

Start the docker compose group by running the following command from the root of the repository:

```shell
docker compose -f getting-started/ozone/docker-compose.yml up
```

Note: This example pulls the `apache/polaris:latest` image, but assumes the image version is `1.2.0-incubating` or later.
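
Optionally, verify that all services have started before continuing. A minimal check using
Docker Compose's status command (the `polaris` service reports `healthy` once its health check passes):

```shell
docker compose -f getting-started/ozone/docker-compose.yml ps
```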

## Connecting From Spark

```shell
bin/spark-sql \
--packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.0,org.apache.iceberg:iceberg-aws-bundle:1.9.0 \
--conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
--conf spark.sql.catalog.polaris=org.apache.iceberg.spark.SparkCatalog \
--conf spark.sql.catalog.polaris.type=rest \
--conf spark.sql.catalog.polaris.uri=http://localhost:8181/api/catalog \
--conf spark.sql.catalog.polaris.token-refresh-enabled=false \
--conf spark.sql.catalog.polaris.warehouse=quickstart_catalog \
--conf spark.sql.catalog.polaris.scope=PRINCIPAL_ROLE:ALL \
--conf spark.sql.catalog.polaris.credential=root:s3cr3t \
--conf spark.sql.catalog.polaris.client.region=irrelevant
```

Note: `s3cr3t` is defined as the password for the `root` user in the `docker-compose.yml` file.

Note: The `client.region` configuration is required for the AWS S3 client to work, but it is not used in
this example since Ozone does not require a specific region.

## Running Queries

Run inside the Spark SQL shell:

```
spark-sql (default)> use polaris;
Time taken: 0.837 seconds

spark-sql ()> create namespace ns;
Time taken: 0.374 seconds

spark-sql ()> create table ns.t1 as select 'abc';
Time taken: 2.192 seconds

spark-sql ()> select * from ns.t1;
abc
Time taken: 0.579 seconds, Fetched 1 row(s)
```
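
To confirm that the table files actually landed in Ozone, you can list the bucket through the
S3 gateway. A hedged sketch reusing the same anonymous-access curl invocation as the setup
scripts in this example:

```shell
# Lists objects under bucket123 via the host-facing S3 endpoint.
# "invalidKey" with SigV4 signing is treated as public access by this Ozone setup.
curl --user "invalidKey:secret" --aws-sigv4 "aws:amz:us-west-1:s3" http://localhost:9878/bucket123
```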

## Lack of Credential Vending

Notice that the Spark configuration does not set an `X-Iceberg-Access-Delegation` header.
This is because Ozone does not support the STS API and consequently cannot produce session
credentials for Polaris to vend to its clients.

The lack of an STS API is reflected in the catalog storage configuration by the
`stsUnavailable=true` property.
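
For comparison, when the storage provider does support STS, credential vending is typically
requested by adding the delegation header to the Spark catalog settings. A hedged sketch of the
extra option (not applicable to this Ozone example):

```shell
--conf spark.sql.catalog.polaris.header.X-Iceberg-Access-Delegation=vended-credentials
```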

## S3 Credentials

In this example Ozone does not require credentials for accessing its S3 API. Therefore, neither
Polaris nor Spark uses any S3 access keys.

If Ozone were configured to require credentials, Spark and Polaris would each need their own
S3 access key / secret properties, because credential vending is not possible with Ozone 2.0.0.
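
On the Polaris side, credentials would typically be supplied through standard AWS environment
variables, as the `docker-compose.yml` already illustrates with `AWS_ACCESS_KEY_ID` /
`AWS_SECRET_ACCESS_KEY`. On the Spark side, a hedged sketch of the extra catalog options, using
Iceberg's S3FileIO properties (the values shown are placeholders, not credentials used anywhere
in this example):

```shell
--conf spark.sql.catalog.polaris.s3.access-key-id=<ozone-access-key> \
--conf spark.sql.catalog.polaris.s3.secret-access-key=<ozone-secret-key>
```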

## S3 Endpoints

Note that the catalog configuration defined in the `docker-compose.yml` contains
different endpoints for the Polaris Server and the client (Spark). Specifically,
the client endpoint is `http://localhost:9878`, but `endpointInternal` is `http://ozone-s3g:9878`.

This is necessary because clients running on `localhost` cannot normally resolve service
names (such as `ozone-s3g`) that are internal to the Docker Compose network.
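
For reference, the storage configuration that the setup container passes to Polaris in
`docker-compose.yml` is:

```json
{
  "storageType": "S3",
  "endpoint": "http://localhost:9878",
  "endpointInternal": "http://ozone-s3g:9878",
  "stsUnavailable": true,
  "pathStyleAccess": true
}
```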
131 changes: 131 additions & 0 deletions getting-started/ozone/docker-compose.yml
@@ -0,0 +1,131 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

services:

  ozone-datanode:
    image: &ozone-image apache/ozone:2.0.0
    ports:
      - 9864
    command: ["ozone","datanode"]
    environment:
      &ozone-common-config
      OZONE-SITE.XML_hdds.datanode.dir: "/data/hdds"
      OZONE-SITE.XML_ozone.metadata.dirs: "/data/metadata"
      OZONE-SITE.XML_ozone.om.address: "ozone-om"
      OZONE-SITE.XML_ozone.om.http-address: "ozone-om:9874"
      OZONE-SITE.XML_ozone.recon.address: "ozone-recon:9891"
      OZONE-SITE.XML_ozone.recon.db.dir: "/data/metadata/recon"
      OZONE-SITE.XML_ozone.replication: "1"
      OZONE-SITE.XML_ozone.scm.block.client.address: "ozone-scm"
      OZONE-SITE.XML_ozone.scm.client.address: "ozone-scm"
      OZONE-SITE.XML_ozone.scm.datanode.id.dir: "/data/metadata"
      OZONE-SITE.XML_ozone.scm.names: "ozone-scm"
      no_proxy: "ozone-om,ozone-recon,ozone-scm,ozone-s3g,localhost,127.0.0.1"
  ozone-om:
    image: *ozone-image
    ports:
      - 9874:9874
    environment:
      <<: *ozone-common-config
      CORE-SITE.XML_hadoop.proxyuser.hadoop.hosts: "*"
      CORE-SITE.XML_hadoop.proxyuser.hadoop.groups: "*"
      ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
      WAITFOR: ozone-scm:9876
    command: ["ozone","om"]
  ozone-scm:
    image: *ozone-image
    ports:
      - 9876:9876
    environment:
      <<: *ozone-common-config
      ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
    command: ["ozone","scm"]
  ozone-recon:
    image: *ozone-image
    ports:
      - 9888:9888
    environment:
      <<: *ozone-common-config
    command: ["ozone","recon"]
  ozone-s3g:
    image: *ozone-image
    ports:
      - 9878:9878
    environment:
      <<: *ozone-common-config
    command: ["ozone","s3g"]

  polaris:
    image: apache/polaris:latest
    ports:
      # API port
      - "8181:8181"
      # Optional, allows attaching a debugger to the Polaris JVM
      - "5005:5005"
    environment:
      JAVA_DEBUG: true
      JAVA_DEBUG_PORT: "*:5005"
      AWS_REGION: us-west-2
      AWS_ACCESS_KEY_ID: minio_root
      AWS_SECRET_ACCESS_KEY: m1n1opwd
      POLARIS_BOOTSTRAP_CREDENTIALS: POLARIS,root,s3cr3t
      polaris.realm-context.realms: POLARIS
      quarkus.otel.sdk.disabled: "true"
    healthcheck:
      test: ["CMD", "curl", "http://localhost:8182/q/health"]
      interval: 2s
      timeout: 10s
      retries: 10
      start_period: 10s

  polaris-setup:
    image: alpine/curl
    depends_on:
      polaris:
        condition: service_healthy
    environment:
      - CLIENT_ID=root
      - CLIENT_SECRET=s3cr3t
    volumes:
      - ../assets/:/assets/
    entrypoint: "/bin/sh"
    command:
      - "-c"
      - >-
        /assets/cloud_providers/await-s3.sh http://ozone-s3g:9878/ ;
        source /assets/polaris/obtain-token.sh;
        echo Creating bucket...;
        curl -X PUT --user "invalidKey:secret" --aws-sigv4 "aws:amz:us-west-1:s3" \
        http://ozone-s3g:9878/bucket123 ;
        echo Creating catalog...;
        export STORAGE_CONFIG_INFO='{"storageType":"S3",
        "endpoint":"http://localhost:9878",
        "endpointInternal":"http://ozone-s3g:9878",
        "stsUnavailable":true,
        "pathStyleAccess":true}';
        export STORAGE_LOCATION='s3://bucket123';
        /assets/polaris/create-catalog.sh POLARIS $$TOKEN;
        echo Extra grants...;
        curl -H "Authorization: Bearer $$TOKEN" -H 'Content-Type: application/json' \
        -X PUT \
        http://polaris:8181/api/management/v1/catalogs/quickstart_catalog/catalog-roles/catalog_admin/grants \
        -d '{"type":"catalog", "privilege":"CATALOG_MANAGE_CONTENT"}';
        echo Done.;