diff --git a/docs/feature-management-experimentation/60-experimentation/experiment-results/index.md b/docs/feature-management-experimentation/60-experimentation/experiment-results/index.md index aaf9ef76ece..c1999297327 100644 --- a/docs/feature-management-experimentation/60-experimentation/experiment-results/index.md +++ b/docs/feature-management-experimentation/60-experimentation/experiment-results/index.md @@ -9,22 +9,22 @@ Understanding how your experiment is performing, and whether it's driving meanin Review your experiment's metrics and overall status. Explore metric-level details, explore trends, and learn how your results are calculated. -For more information, see [Viewing experiment results](./viewing-experiment-results). +For more information, see [Viewing experiment results](/docs/feature-management-experimentation/experimentation/experiment-results/viewing-experiment-results). ## Analyze experiment results Drill down into experiment details to validate setup, explore user behavior, and identify potential issues. -For more information, see [Analyzing experiment results](./analyzing-experiment-results). +For more information, see [Analyzing experiment results](/docs/feature-management-experimentation/experimentation/experiment-results/analyzing-experiment-results). ## Reallocate traffic Once you've analyzed your results, you can adjust your rollout strategy by shifting users between treatments or rolling out to 100% of your users. -For more information, see [Reallocating traffic](./reallocate-traffic). +For more information, see [Reallocating traffic](/docs/feature-management-experimentation/experimentation/experiment-results/reallocate-traffic). ## Share results You can download key metrics, trends, and impact summaries in CSV or JSON format for offline analysis or sharing with teammates. - For more information, see [Sharing experiment results](./sharing-experiment-results/). \ No newline at end of file + For more information, see [Sharing experiment results](/docs/feature-management-experimentation/experimentation/experiment-results/sharing-experiment-results/). \ No newline at end of file diff --git a/docs/feature-management-experimentation/60-experimentation/experiment-results/viewing-experiment-results/metric-details-and-trends.md b/docs/feature-management-experimentation/60-experimentation/experiment-results/viewing-experiment-results/metric-details-and-trends.md index 11401c3a7ab..ea3f49ba286 100644 --- a/docs/feature-management-experimentation/60-experimentation/experiment-results/viewing-experiment-results/metric-details-and-trends.md +++ b/docs/feature-management-experimentation/60-experimentation/experiment-results/viewing-experiment-results/metric-details-and-trends.md @@ -69,7 +69,7 @@ On the impact snapshot chart, you can analyze data for ___key metrics___ using [ * **Run more data-driven experiments.** Iterate on your next hypotheses or run follow-up experiments using the insights gained on what worked or didn’t in past experiments. :::info -[Multiple comparison correction](../../key-concepts/multiple-comparison-correction) is not applied to dimensional analysis. +[Multiple comparison correction](/docs/feature-management-experimentation/experimentation/key-concepts/multiple-comparison-correction) is not applied to dimensional analysis. 
:::
Before you can select a _dimension_ to analyze on the metric Impact snapshot, you need to send a corresponding _[event property](/docs/feature-management-experimentation/experimentation/events/#event-properties)_ for the event measured by the metric. (You can set event properties in code when you call the FME SDK's `track` method.) An Admin also needs to [configure dimensions and values](/docs/feature-management-experimentation/experimentation/experiment-results/analyzing-experiment-results/dimensional-analysis/#configuring-dimensions-and-values) to show them in the Select a dimension dropdown.
## Metric list diff --git a/docs/feature-management-experimentation/warehouse-native/experiment-results/analyze-experiment-results/health-check.md b/docs/feature-management-experimentation/warehouse-native/experiment-results/analyze-experiment-results/health-check.md new file mode 100644 index 00000000000..94199d7a45d --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/experiment-results/analyze-experiment-results/health-check.md @@ -0,0 +1,8 @@ +--- +title: Health Check +sidebar_position: 40 +--- + +import HealthCheck from '/docs/feature-management-experimentation/60-experimentation/experiment-results/analyzing-experiment-results/health-check.md'; + + \ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/experiment-results/analyze-experiment-results/index.md b/docs/feature-management-experimentation/warehouse-native/experiment-results/analyze-experiment-results/index.md new file mode 100644 index 00000000000..7929fa99e05 --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/experiment-results/analyze-experiment-results/index.md @@ -0,0 +1,8 @@ +--- +title: Analyze Experiment Results +sidebar_position: 20 +--- + +import AnalyzeResults from '/docs/feature-management-experimentation/60-experimentation/experiment-results/analyzing-experiment-results/index.md'; + + \ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/experiment-results/index.md b/docs/feature-management-experimentation/warehouse-native/experiment-results/index.md new file mode 100644 index 00000000000..c2b62d24ff6 --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/experiment-results/index.md @@ -0,0 +1,43 @@ +--- +title: Warehouse Native Experimentation Results +sidebar_label: Warehouse Native Experiment Results +description: Analyze your experiment results in Harness FME. +sidebar_position: 5 +--- + + + +## Overview + +Understanding how your experiment is performing, and whether it's driving meaningful impact, is key to making confident, data-informed product decisions. Warehouse Native experiment results help you interpret metrics derived directly from your data warehouse, assess experiment health, and share validated outcomes with stakeholders. + +## View experiment results + +Review key experiment metrics and overall significance in Harness FME. + +![](../static/view-results.png) + +Explore [how each metric performs](/docs/feature-management-experimentation/warehouse-native/experiment-results/view-experiment-results/) across treatments, inspect query-based data directly from your warehouse, and understand how results are calculated based on your metric definitions. + +## Analyze experiment results + +Drill down into experiment details to validate setup, confirm metric source alignment, and investigate user or account-level behavior. + +![](../static/view-metrics.png) + +Use [detailed metric breakdowns](/docs/feature-management-experimentation/warehouse-native/experiment-results/analyze-experiment-results/) to identify anomalies or confirm expected outcomes. + +## Share results + +Download experiment metrics, statistical summaries, and warehouse query outputs in CSV or JSON format for further analysis or collaboration with your team. + +![](../static/share-results.png) + +You can also share experiment results directly within Harness FME to maintain visibility across product, data, and engineering teams. 
\ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/experiment-results/view-experiment-results/index.md b/docs/feature-management-experimentation/warehouse-native/experiment-results/view-experiment-results/index.md new file mode 100644 index 00000000000..e3df1542b4c --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/experiment-results/view-experiment-results/index.md @@ -0,0 +1,78 @@ +--- +title: View Experiment Results +sidebar_position: 10 +--- + +## Overview + +You can view your experiment results from the **Experiments** page. This page provides a centralized view of all experiments and allows you to quickly access performance metrics, significance levels, and summary details for each treatment group. + +Click into any experiment to view detailed results, including the following: + +* Experiment metadata, such as: + + - Experiment name, owners, and tags + - Start and end dates + - Active targeting rule + - Total number of exposures + - Treatment group assignment counts and percentages + +* Treatment comparison, including: + + - The baseline treatment (e.g. `off`) + - One or more comparison treatments (e.g. `low`) + +## Use AI Summarize + +For faster interpretation of experiment outcomes, the Experiments page includes an **AI Summarize** button. This analyzes key and guardrail metric results to generate a summary of your experiment, making it easier to share results and next steps with your team. + +![Experiment Summary](../../static/summarize.png) + +The summary is broken into three sections: + +* **Winner Analysis**: Highlights whether a clear winner emerged across key metrics and guardrails. +* **Overall Impact Summary**: Summarizes how the treatment impacted user behavior or business outcomes. +* **Next Steps Suggestion**: Recommends what to do next, whether to iterate, roll out, or revisit your setup. + +## Manually recalculating metrics + +You can manually run calculations on-demand by clicking the Recalculate button. Recalculations can be run for key metrics only, or for all metrics (key, guardrail, and supporting). **Most recalculations take up to five minutes, but can take longer, depending on the size of your data and the length of your experiment.** + +Reasons you may choose to recalculate metrics: + +* If you create or modify a metric after the last updated metric impact calculation, recalculate to get the latest results. +* If you assign a metric to the Key metrics or Supporting metrics groups, recalculate to populate results for those metrics. + +The **Recalculate** button will be disabled when: + +* **A forced recalculation is already scheduled.** A calculation is in progress. You can click the Recalculate button again, as soon as the currently running calculation finishes. + +## Concluding on interim data + +Although we show the statistical results for multiple interim points, we caution against drawing conclusions from interim data. Each interim point at which the data is analyzed has its own chance of bringing a false positive result, so looking at more points brings more chance of a false positive. For more information about statistical significance and false positives, see [Statistical significance](/docs/feature-management-experimentation/release-monitoring/metrics/statistical-significance/). 
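As a rough illustration, if each of k interim looks were treated as an independent test at significance level α, the chance of seeing at least one false positive would be:

```math
P(\text{at least one false positive}) = 1 - (1 - \alpha)^k
```

With α = 0.05 and 10 interim looks, that is 1 - 0.95^10 ≈ 0.40. Interim analyses of cumulative data are positively correlated rather than independent, so the true inflation is smaller than this bound suggests, but it remains well above α.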
+ +If you were to look at all the p-values from the interim analysis points and claim a significant result if any of those were below your significance threshold, then you would have a substantially higher false positive rate than expected based on the threshold alone. For example, you would have far more than a 5% chance of seeing a falsely significant result when using a significance threshold of 0.05, if you concluded on any significant p-value shown in the metric details and trends view. This is because there are multiple chances for you to happen upon a time when the natural noise in the data happened to look like a real impact. + +For this reason, it is good practice to only draw conclusions from your experiment at the predetermined conclusion point(s), such as at the end of the review period. + +### Interpreting the line chart and trends + +The line chart provides a visualization of how the measured impact has changed since the beginning of the feature flag. This may be useful for gaining insights on any seasonality or for identifying any unexpected sudden changes in the performance of the treatments. + +However it is important to remember that there will naturally be noise and variation in the data, especially when the sample size is low at the beginning of a feature flag, so some differences in the measured impact over time are to be expected. + +Additionally, since the data is cumulative, it may be expected that the impact changes as the run time of your feature flag increases. For example, the fraction of users who have done an event may be expected to increase over time simply because the users have had more time to do the action. + +### Example Interpretation + +The image below shows the impact over time line chart for an example A/A test, a feature flag where there is no true difference between the performance of the treatments. Despite there being no difference between the treatments, and hence a constant true impact of zero, the line chart shows a large measured difference at the beginning, and an apparent trend upwards over time. + +This is due only to noise in the data at the early stages of the feature flag when the sample size is low, and the measured impact moving towards the true value as more data arrives. + +![Line Chart](../../static/line-chart.png) + +Note also that in the chart above there are 3 calculation buckets for which the error margin is entirely below zero, and hence the p-values at those points in time would imply a statistically significant impact. This is again due to noise and the unavoidable chance of false positive results. + +If you weren't aware of the risk of peeking at the data, or of considering multiple evaluations of your feature flag at different points in time, then you may have concluded that a meaningful impact had been detected. However, by following the recommended practice of concluding only at the predetermined end time of your feature flag you would eventually have seen a statistically inconclusive result as expected for an A/A test. + +If you have questions or need help troubleshooting, contact [support@split.io](mailto:support@split.io). 
diff --git a/docs/feature-management-experimentation/warehouse-native/index.md b/docs/feature-management-experimentation/warehouse-native/index.md new file mode 100644 index 00000000000..e6208221c61 --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/index.md @@ -0,0 +1,114 @@ +--- +title: Warehouse Native Experimentation +id: index +slug: /feature-management-experimentation/warehouse-native +sidebar_label: Overview +sidebar_position: 1 +description: Learn how to run experiments in your data warehouse using Harness Feature Management & Experimentation (FME). +--- + + + +## Overview + +Warehouse Native enables [experimentation](/docs/feature-management-experimentation/experimentation/setup/) workflows, from targeting and assignment to analysis, and provides a statistical engine for analyzing existing experiments with measurement tools in Harness Feature Management & Experimentation (FME). + +## How Warehouse Native works + +Warehouse Native runs experimentation jobs directly in your data warehouse by using your existing data to calculate metrics and enrich experiment analyses. + +![](./static/data-flow.png) + +The data model is designed around two primary types of data: **assignment data** and **performance/behavioral data**, which power the FME statistical engine in your warehouse. + +Key components include: + +- **Assignment data**: Tracks user or entity assignments to experiments. This includes metadata about the experiment. +- **Performance and behavioral data**: Captures metrics, events, and user behavior relevant to the experiment. +- **Experiment metadata**: Contains definitions for experiments, including the experiment ID, name, start/end dates, traffic allocation, and grouping logic. +- **Metric definitions**: Defines how metrics are computed in the warehouse, including aggregation logic and denominators. These definitions ensure analyses are standardized across experiments. + +### Cloud Experimentation + +Cloud Experiments are executed and analyzed within Harness FME, which collects feature flag impressions and performance data from your application and integrations. For more information, see the [Cloud Experimentation documentation](/docs/feature-management-experimentation/experimentation).
+ +```mermaid +flowchart LR + %% Customer infrastructure + subgraph CI["Customer Infrastructure"] + direction TB + subgraph APP["Your Application"] + FME["FME SDK"] + style FME fill:#9b5de5,stroke:#9b5de5,color:#fff + end + + integrations["Integrations including Google Analytics, Segment, Sentry, mParticle, Amplitude, and Amazon S3"] + style integrations fill:none,stroke:none,color:#fff + end + style CI fill:#8110B5,stroke:#8110B5,color:#fff + + %% Harness FME System + subgraph HFM["Harness FME"] + direction TB + + %% Horizontal input boxes without a subgraph + FF["FME Feature Flags"] + PD["Performance and behavioral data"] + style FF fill:#9b5de5,stroke:#9b5de5,color:#fff + style PD fill:#9b5de5,stroke:#9b5de5,color:#fff + + AE["FME Attribution Engine"] + style AE fill:#9b5de5,stroke:#9b5de5,color:#fff + + %% Connect inputs to Attribution Engine + FF --> AE + PD --> AE + end + style HFM fill:#8110B5,stroke:#8110B5,color:#fff + + %% Arrows from Customer Infra to input boxes + CI -- "Feature flag impression data" --> FF + CI -- "Performance and additional event data" --> PD +``` + +### Warehouse Native + +Warehouse Native Experiments are executed directly in your data warehouse, leveraging assignment and behavioral data from Harness FME to calculate metrics and run statistical analyses at scale. + +```mermaid +flowchart LR + subgraph DW["Data Warehouse"] + style DW fill:#8110B5,stroke:#8110B5,color:#fff + direction TB + AF["Assignment and FME feature flag data"] + PB["Performance and behavioral data"] + AE["FME Attribution Engine"] + style AF fill:#9b5de5,stroke:#9b5de5,color:#fff + style PB fill:#9b5de5,stroke:#9b5de5,color:#fff + style AE fill:#9b5de5,stroke:#9b5de5,color:#fff + end + + subgraph HFME[" "] + direction TB + HFM["Harness FME"] + PAD1[" "]:::invisible + PAD2[" "]:::invisible + end + + classDef invisible fill:none,stroke:none; + style HFM fill:#8110B5,stroke:#8110B5,color:#fff + + DW --> HFM + +``` + +## Get started + +To get started, [connect a data warehouse](/docs/feature-management-experimentation/warehouse-native/integrations/) and set up [assignment and metric sources](/docs/feature-management-experimentation/warehouse-native/setup/) to enable Warehouse Native Experimentation in Harness FME. \ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/integrations/amazon-redshift.md b/docs/feature-management-experimentation/warehouse-native/integrations/amazon-redshift.md new file mode 100644 index 00000000000..550da328069 --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/integrations/amazon-redshift.md @@ -0,0 +1,132 @@ +--- +title: Connect Amazon Redshift +description: Learn how to integrate Amazon Redshift with Harness FME to enable Warehouse Native Experimentation. +sidebar_label: Amazon Redshift +sidebar_position: 2 +--- + + + +## Overview + +Warehouse Native Experimentation allows you to run experiments on data that already lives in your data warehouse. By connecting Harness FME directly to your Amazon Redshift instance, you can securely query and analyze experiment data from your source of truth. + +To begin, connect your Amazon Redshift instance as a data source through a direct connection or using [IAM role-based authentication](https://docs.aws.amazon.com/redshift/latest/mgmt/generating-user-credentials.html). 
+ +### Prerequisites + +Ensure that you have the following before getting started: + +- Access to your organization's Redshift cluster endpoint and database +- An IAM role with appropriate read access to the database and schema containing experiment data, and write access to a results table +- A designated results table where experiment results are stored in Amazon Redshift + +## Setup + +Harness recommends the following best practices: + +- Use IAM Role authentication instead of static credentials. +- Restrict access to read-only privileges. +- Keep Redshift clusters within [secure VPCs](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-vpc.html) and use SSL connections. +- Regularly audit IAM Roles and access policies. + +To integrate Amazon Redshift as a data warehouse for Warehouse Native Experimentation: + +1. Select Redshift as your data warehouse. In the **Data Sources** tab of your Harness FME project, select **Redshift** from the list of supported data warehouses. +1. Enter the following connection details: + + | Field | Description | Example | + |---|---|---| + | Cluster Endpoint (Host) | The endpoint of your Redshift cluster. | `redshift-cluster.analytics.us-east-1.redshift.amazonaws.com` | + | Port | The port number used by your Redshift instance (by default, set to `5439`). | `5439` | + | Database | The database containing your experimentation data. | `experiments` | + | Schema | The schema within your database containing your experiment or metric tables. | `analytics` | + | IAM Role ARN | The IAM role with permissions to access your Redshift cluster. | `arn:aws:iam::123456789012:role/FMEAccessRole` | + | Results Table Name | The name of the table where experiment results are stored. | `FME_RESULTS` | + +1. Configure authentication. Harness FME supports [IAM role-based authentication](https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-authentication-access-control.html) for secure, temporary access to Redshift. + + * Create or use an existing IAM role with permissions to access the cluster. + * Attach a policy granting Redshift read access to relevant databases and schemas. + * Provide the IAM Role ARN in Harness FME. + +1. Select a database and a schema. After authentication, Harness FME retrieves the list of accessible databases and schemas based on your IAM Role permissions. Select the one containing your experiment exposure and event/metric data. +1. Specify a results table. Designate a results table where Harness FME will write experiment analysis results. Ensure the following: + + * The table exists in your database. + * The schema matches the expected format for experiment results below. + +
+ | Field | Type | Description | + |---|---|---| + | `METRICRESULTID` | `VARCHAR` | Unique identifier representing a specific calculation per metric, per experiment, per analysis run. | + | `TREATMENT` | `VARCHAR` | The experiment variant (e.g., Control or Treatment) associated with the metric results. | + | `DIMENSIONNAME` | `VARCHAR` | The name of the dimension being analyzed (e.g., country, platform). | + | `DIMENSIONVALUE` | `VARCHAR` | The corresponding value of the analyzed dimension. | + | `ATTRIBUTEDKEYSCOUNT` | `BIGINT` | Count of unique keys (users, sessions, etc.) attributed to this metric result. | + | `REQUESTTIMESTAMP` | `TIMESTAMP` | Timestamp when the metric computation request occurred. | + | `MIN` | `FLOAT8` | Minimum observed value for the metric. | + | `MAX` | `FLOAT8` | Maximum observed value for the metric. | + | `COUNT` | `BIGINT` | Total number of observations included in the metric calculation. | + | `SUM` | `FLOAT8` | Sum of all observed metric values. | + | `MEAN` | `FLOAT8` | Average (mean) of the metric values. | + | `P50` | `FLOAT8` | 50th percentile (median) metric value. | + | `P95` | `FLOAT8` | 95th percentile metric value. | + | `P99` | `FLOAT8` | 99th percentile metric value. | + | `VARIANCE` | `FLOAT8` | Variance of the metric values. | + | `EXCLUDEDUSERCOUNT` | `BIGINT` | Number of users excluded from the analysis (due to filters, SRM, etc.). | + | `ASOFTIMESTAMP` | `TIMESTAMP` | Timestamp representing when the result snapshot was written. | + + To create the results table with the correct structure, run the following SQL statement in Amazon Redshift: + + ```sql + CREATE TABLE IF NOT EXISTS .. ( + METRICRESULTID VARCHAR(256), + TREATMENT VARCHAR(256), + DIMENSIONNAME VARCHAR(256), + DIMENSIONVALUE VARCHAR(256), + ATTRIBUTEDKEYSCOUNT BIGINT, + REQUESTTIMESTAMP TIMESTAMP, + MIN FLOAT8, + MAX FLOAT8, + COUNT BIGINT, + SUM FLOAT8, + MEAN FLOAT8, + P50 FLOAT8, + P95 FLOAT8, + P99 FLOAT8, + VARIANCE FLOAT8, + EXCLUDEDUSERCOUNT BIGINT, + ASOFTIMESTAMP TIMESTAMP + ); + ``` + + + +1. Test the connection by clicking **Test Connection**. If the test fails, verify the following: + + * The IAM Role has the correct trust policy and permissions. + * The Redshift cluster is publicly accessible (or within a connected VPC). + * The correct database, schema, and port are entered. + +1. Save and activate. Once the test passes, click **Save** to create the connection. + +Your Redshift data source can now be used to create assignment and metric sources for Warehouse Native Experimentation. + +## Example Redshift configuration + +| Setting | Example | +| ----------------- | -------------------- | +| Cluster Endpoint | `redshift-cluster.analytics.us-east-1.redshift.amazonaws.com` | +| Port | `5439` | +| Database | `experiments` | +| Schema | `analytics` | +| IAM Role ARN | `arn:aws:iam::123456789012:role/FMEAccessRole` | +| Results Table | `FME_RESULTS` | diff --git a/docs/feature-management-experimentation/warehouse-native/integrations/index.md b/docs/feature-management-experimentation/warehouse-native/integrations/index.md new file mode 100644 index 00000000000..5f52e3e188b --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/integrations/index.md @@ -0,0 +1,35 @@ +--- +title: Warehouse Native Experimentation Integrations +description: Learn how to connect your data warehouse with Harness FME to setup Warehouse Native Experimentation. 
+sidebar_label: Connect Your Data Warehouse +sidebar_position: 2 +--- + + + +## Overview + +To run experiment analyses using Warehouse Native, Harness FME connects directly to your data warehouse. This allows you to use your existing event and metric data without duplicating or moving it. + +Warehouse Native Experimentation requires the following permissions: + +* Read access to event, exposure, and metric tables +* Write access to a dedicated Harness schema for storing analysis results +* Permission to run queries and scheduled jobs + +## Supported integrations + +Warehouse Native Experimentation supports the following data warehouses: + +import { Section, dataWarehouses } from '@site/src/components/Docs/data/whnIntegrations'; + +
+ +Set up your connection, configure access, and apply recommended policies to start analyzing experiments in your data warehouse. \ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/integrations/snowflake.md b/docs/feature-management-experimentation/warehouse-native/integrations/snowflake.md new file mode 100644 index 00000000000..c56dcb5cc9a --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/integrations/snowflake.md @@ -0,0 +1,139 @@ +--- +title: Connect Snowflake +description: Learn how to integrate Snowflake with Harness FME to enable Warehouse Native Experimentation. +sidebar_label: Snowflake +sidebar_position: 1 +--- + + + +## Overview + +Warehouse Native Experimentation allows you to run experiments on data that already lives in your data warehouse. By connecting Harness FME directly to your Snowflake instance, you can securely query and analyze experiment data from your source of truth. + +To begin, connect your Snowflake instance as a data source. + +### Prerequisites + +Ensure that you have the following before getting started: + +- Access to your organization's Snowflake instance +- A Snowflake role with appropriate read access to the database and schema containing experiment data, and write access to a results table +- A private key and associated user configured for key-pair authentication +- A designated results table where experiment results are stored in Snowflake + +## Setup + +Harness recommends the following best practices: + +- Use a service account rather than a personal Snowflake user. +- Grant read-only access to the databases and schemas Harness FME queries. +- Rotate private keys periodically. +- If your Snowflake instance enforces [inbound restrictions](https://docs.snowflake.com/en/developer-guide/external-network-access/creating-using-external-network-access), confirm network and IP allowlisting. + +To integrate Snowflake as a data warehouse for Warehouse Native Experimentation: + +1. Select Snowflake as your data warehouse. In the **Data Sources** tab of your Harness FME project, select **Snowflake** from the list of supported data warehouses. +1. Enter the following connection details: + + | Field | Description | Example | + |---|---|---| + | Server (Account Identifier) | Your Snowflake account identifier or server URL. | `xy12345.us-west-2` | + | Warehouse | The compute warehouse Harness FME should use to execute queries. | `ANALYTICS_WH` | + | Database | The database containing your experimentation data. | `PROD_EXPERIMENTS` | + | Schema | The schema within your database containing your experiment or metric data. | `AB_TESTING` | + | Username | The Snowflake username tied to your private key. | `fme_service_user` | + | Role | The Snowflake role to assume for this connection. | `DATA_ANALYST` | + | Results Table Name | The name of the table where experiment results are stored. | `EXPERIMENT_RESULTS` | + + :::info Note + Harness FME respects Snowflake's built-in [role-based access controls](https://docs.snowflake.com/en/user-guide/security-access-control-privileges). The data source connection only has access to objects allowed for the specified role. + ::: + +1. Provide authentication credentials. Harness FME supports [key pair authentication](https://docs.snowflake.com/en/user-guide/key-pair-auth) for secure, password-less access. + + * Option 1: Paste your private key directly into the text field. + * Option 2: Upload a private key file. 
+ + Ensure the key corresponds to the username provided and is not encrypted with a passphrase. + +1. Select a database and a schema. After authentication, you can browse available databases, schemas, and tables based on your role permissions. Select the database and schema that contain your assignment and metric source tables. +1. Specify a results table. Designate a results table where Harness FME will write experiment analysis results. Ensure the following: + + * The table exists in your database. + * The schema matches the expected format for experiment results below. + +
+ | Field | Type | Description | + |---|---|---| + | `METRICRESULTID` | `VARCHAR` | Unique identifier representing a specific calculation per metric, per experiment, per analysis run. | + | `TREATMENT` | `VARCHAR` | The experiment variant (e.g., Control or Treatment) associated with the metric results. | + | `DIMENSIONNAME` | `VARCHAR` | The name of the dimension being analyzed (e.g., country, platform). | + | `DIMENSIONVALUE` | `VARCHAR` | The corresponding value of the analyzed dimension. | + | `ATTRIBUTEDKEYSCOUNT` | `NUMBER` | Count of unique keys (users, sessions, etc.) attributed to this metric result. | + | `REQUESTTIMESTAMP` | `TIMESTAMP_NTZ` | Timestamp when the metric computation request occurred. | + | `MIN` | `FLOAT` | Minimum observed value for the metric. | + | `MAX` | `FLOAT` | Maximum observed value for the metric. | + | `COUNT` | `NUMBER` | Total number of observations included in the metric calculation. | + | `SUM` | `FLOAT` | Sum of all observed metric values. | + | `MEAN` | `FLOAT` | Average (mean) of the metric values. | + | `P50` | `FLOAT` | 50th percentile (median) metric value. | + | `P95` | `FLOAT` | 95th percentile metric value. | + | `P99` | `FLOAT` | 99th percentile metric value. | + | `VARIANCE` | `FLOAT` | Variance of the metric values. | + | `EXCLUDEDUSERCOUNT` | `NUMBER` | Number of users excluded from the analysis (due to filters, SRM, etc.). | + | `ASOFTIMESTAMP` | `TIMESTAMP_NTZ` | Timestamp representing when the result snapshot was written. | + + To create the results table with the correct structure, run the following SQL statement in Snowflake: + + ```sql + CREATE OR REPLACE TABLE .. ( + METRICRESULTID VARCHAR(16777216), + TREATMENT VARCHAR(16777216), + DIMENSIONNAME VARCHAR(16777216), + DIMENSIONVALUE VARCHAR(16777216), + ATTRIBUTEDKEYSCOUNT NUMBER(38,0), + REQUESTTIMESTAMP TIMESTAMP_NTZ(9), + MIN FLOAT, + MAX FLOAT, + COUNT NUMBER(38,0), + SUM FLOAT, + MEAN FLOAT, + P50 FLOAT, + P95 FLOAT, + P99 FLOAT, + VARIANCE FLOAT, + EXCLUDEDUSERCOUNT NUMBER(38,0), + ASOFTIMESTAMP TIMESTAMP_NTZ(9) + ); + ``` + +1. Test the connection by clicking **Test Connection**. Harness FME confirms the following: + + * The credentials and key pair are valid. + * The warehouse and role are accessible. + * The specified database and schema exist and are accessible. + +1. Save and activate. Once the test passes, click **Save** to create the connection. + +Your Snowflake data source can now be used to create assignment and metric sources for Warehouse Native Experimentation. + +## Example Snowflake configuration + +| Setting | Example | +| ----------------- | -------------------- | +| Vendor | Snowflake | +| Server | `xy12345.us-west-2` | +| Warehouse | `ANALYTICS_WH` | +| Database | `PROD_EXPERIMENTS` | +| Schema | `PUBLIC` | +| Username | `fme_service_user` | +| Role | `DATA_ANALYST` | +| Results Table | `EXPERIMENT_RESULTS` | diff --git a/docs/feature-management-experimentation/warehouse-native/setup/assignment-sources.md b/docs/feature-management-experimentation/warehouse-native/setup/assignment-sources.md new file mode 100644 index 00000000000..e77cc2d1ffc --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/setup/assignment-sources.md @@ -0,0 +1,105 @@ +--- +title: Preparing Assignment Source Tables for Warehouse Native Experimentation +description: Learn how to prepare your assignment source tables in your data warehouse for Warehouse Native Experimentation. 
+sidebar_label: Prepare Assignment Source Tables +sidebar_position: 1 +--- + + + +## Overview + +To prepare Assignment Sources for Warehouse Native Experimentation, transform your raw exposure or impression logs into a clean, standardized table that serves as the foundation for experimentation analyses. + +This page describes the required fields, recommended fields, and best practices for preparing your assignment source tables. + +## Required columns + +Every Assignment Source table must include the following columns: + +| Column | Type | Description | +| ----------------------------- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Unique Key** | `STRING` | Unique identifier for the unit of randomization (for example, `user_id`, `account_id`, or a custom key). Must be stable across the experiment duration. | +| **Exposure Timestamp** | `DATETIME` / `TIMESTAMP` | The precise time when the assignment occurred (for example, when an impression was logged, a flag evaluated, or `getTreatment` was called). | +| **Treatment (Variant Group)** | `STRING` | The assigned experiment variant (for example, `control`, `treatment_a`, `variant_1`). | + +:::info +These fields are mandatory. Without them, Warehouse Native cannot map exposures to experiment results. +::: + +## Recommended columns + +While not required, the following fields make debugging, filtering, and governance more efficient. + +| Column | Type | Description | +| ------------------------ | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Experiment ID / Name** | `STRING` | Helps differentiate exposures when multiple experiments are logged in the same raw table. | +| **Targeting Rule** | `STRING` | Indicates which targeting rule or condition led to the assignment. Useful for audit and debugging. If you are using FME feature flag impressions, filter by a single targeting rule to ensure the experiment analyzes the intended population. | +| **Environment ID** | `STRING` | Allows filtering by environment (for example, `production`, `staging`). When configuring an assignment source in FME, you can map column values to a matching Harness environment or hard-code a single environment. When creating an experiment, it must be scoped to one environment. | +| **Traffic Type** | `STRING` | Distinguishes the unit type (for example, `user`, `account`, `anonymous visitor`). When configuring an assignment source, you can map column values or hard-code the environment. Each experiment must be scoped to one traffic type. | + +## Common raw table schemas + +Most organizations log impressions or exposures from feature flag evaluations, SDKs, or event pipelines. Below are common raw schemas and how to normalize them. + +### Feature Flag Evaluation Logs + +| **Example Raw Schema** | **Transformations** | +| --------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `user_id`
<br /> `flag_name` <br /> `treatment` <br /> `impression_time` <br /> `environment` <br /> `rule_id` | • Map `flag_name` values → `experiment_id` (if multiple flags correspond to the same experiment). <br /> • Cast `impression_time` to `TIMESTAMP`. <br /> • Deduplicate on `(user_id, experiment_id)` by keeping the earliest exposure. | + +### A/B Test Impression Logs + +| **Example Raw Schema** | **Transformations** | | --- | --- | | `experiment_id` <br /> `user_id` <br /> `bucket` or `arm` <br /> `impression_time` | • Standardize `bucket` → `treatment`. <br /> • Standardize `impression_time` → `exposure_timestamp`. <br /> • Deduplicate to keep only the first exposure per user per experiment. | + + +### Event Logging Pipelines (Custom Analytics Events) + +| **Example Raw Schema** | **Transformations** | | --- | --- | | `event_name` <br /> `event_time` <br /> `properties.experiment_id` <br /> `properties.variant` <br /> `properties.user_id` | • Flatten nested fields (`JSON` → explicit columns). <br /> • Filter to only `event_name = 'experiment_exposure'`. <br />
• Standardize column names to match required schema. | + +## Prepare your assignment table + +Follow these best practices for preparing your assignment table in your data warehouse. + +- **De-duplication**: Keep only the earliest exposure per user per experiment. For example: + + ```sql + QUALIFY ROW_NUMBER() OVER ( + PARTITION BY user_id, experiment_id + ORDER BY exposure_timestamp ASC + ) = 1 + ``` + +- **Consistent Variant Labels**: Standardize variant naming (`control`, `treatment`, `variant_1`) across experiments. Avoid null or empty strings; default to `control` if needed. + +- **Timestamps in UTC**: Store all exposure timestamps in UTC for consistent comparisons across regions. + +- **Stable Identifiers**: Use the same user or account key across Assignment Source and Metric Source tables. If your system logs multiple IDs (for example, `cookie_id` and `user_id`), choose the most stable one. + +- **Environment Separation**: If raw tables mix environments (for example, `staging` and `production`), add an `environment_id` column and filter accordingly. This prevents accidental inclusion of test data in production environments. + +- **Partitioning and Indexing**: Partition large tables by `DATE(exposure_timestamp)` to optimize query performance. Cluster or index by `experiment_id` and `user_id` for faster lookups. + +## Example prepared table schema + +| Column | Type | Example | +| ------------------ | --------- | ---------------------- | +| `user_id` | `STRING` | `abc123` | +| `experiment_id` | `STRING` | `checkout_flow_v2` | +| `treatment` | `STRING` | `control` | +| `exposure_timestamp` | `TIMESTAMP` | `2025-03-14T12:45:00Z` | +| `environment_id` | `STRING` | `prod` | +| `traffic_type` | `STRING` | `user` | + +Once your Assignment Source tables are prepared and validated, see [Setting Up an Assignment Source](/docs/feature-management-experimentation/warehouse-native/setup/) to connect them in Harness FME. \ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/setup/configure-assignments.md b/docs/feature-management-experimentation/warehouse-native/setup/configure-assignments.md new file mode 100644 index 00000000000..390701864f1 --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/setup/configure-assignments.md @@ -0,0 +1,171 @@ +--- +title: Configuring Assignment Sources for Warehouse Native Experimentation +description: Learn how to configure your assignment source tables in your data warehouse for Warehouse Native Experimentation. +sidebar_label: Configure Assignment Sources +sidebar_position: 3 +--- + + + +## Overview + +When creating or editing an Assignment Source, navigate to your project's settings: **Admin Settings** > **Project Settings** > **View Project** (for non-migrated orgs) or **FME Settings** > **Project Settings** > **View Project** (for migrated orgs). From there, you can define the assignment source table using either a **Table name** or a **SQL query**. + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + + + + +### Select a table + +:::info +Recommended if your data is already modeled into a clean impression/exposure table. +::: + +1. Select an existing table name directly from the schema. +1. Click **Test connection** to validate that Harness can query the table successfully before continuing. + +With Assignment Sources configured, you can confidently create experiments, knowing all exposures are correctly captured, standardized, and reusable across analyses. 
+ + + + +### Use a custom SQL query + +:::info +Recommended for light data transformations (e.g., extracting values from JSON), joins across multiple tables, or scoping to a subset of data. +::: + +You must have permissions to access all tables referenced in your query, based on the role and credentials configured when setting up your warehouse connection. + +1. Write a SQL query that outputs the required fields. +1. After entering your query, click **Run query** to validate and preview results before proceeding. + + + + +Harness FME will preview the query output so you can confirm the correct fields are returned. + +## Add field mappings + +Define the following fields from your assignment source to Harness FME: + +| Field | Description | +|---|---| +| Unique Identifier | Maps to the column representing the unique key for user, account, or entity. | +| Impression Timestamp | Maps to the column representing when the user was assigned to a treatment. | +| Treatment | Maps to the column that stores the treatment or experiment variant (e.g., `control`, `variant_a`). | + +### Configure your environments + + + + +Select an environment column and map its values to Harness FME environments. For example, select the `ENV_NAME` column and map its values (`US-Prod`, `UK-Prod`) to your Harness project’s `Production` environment and map the `Stg` values (`US-Stg`, `UK-Stg`) to your Harness project’s `Staging` environment. + +This allows a single Assignment Source to span multiple environments. + + + + +Instead of selecting a column, set a fixed Harness FME environment for the entire Assignment Source (e.g., always `Production`). + +This is recommended if the entire source table is scoped to one environment. + + + + +### Configure your traffic types + +Similar to environments, traffic types can be set up in two ways: + + + + +Select a traffic type column (e.g., `ttid`) and map its values to Harness FME traffic types (e.g., `user`, `account`, or `anonymous`). + +This is recommended if the same Assignment Source covers multiple population types. + + + + +Instead of selecting a column, set a fixed Harness FME traffic type for the entire Assignment Source (e.g., always `account`). + +This is recommended if the entire source table is scoped to one population type. + + + + +### Additional configuration options + +* **Preview data**: Harness shows a preview of the data returned from your table or query so you can validate that the expected rows and columns are present. +* **Owners**: Assign one or more owners to make clear who is responsible for maintaining the Assignment Source. +* **Tags**: Add tags (e.g., by team, environment, or use case) to make sources easier to discover and organize. + +## Manage assignment sources + +Assignment Sources can be reusable and standardized, or tailored to individual experiments depending on your organization’s needs: + +* Reusable, standardized sources are recommended if you have a general impressions/exposures table. + + This approach makes setup faster and consistent across teams. Be mindful of potential query processing speed and warehouse costs when working with very large shared tables. + +* Custom per-experiment sources are recommended if you want to scope data more tightly for privacy, relevancy, or performance. + + Limits experiment creators to a specific subset of data, reducing query volume and potential data access concerns. 
+ +Ultimately, it’s up to your organization whether to centralize around a single reusable source or create smaller, experiment-specific sources. Many teams use a mix of both strategies depending on scale and governance needs. + +Once you've set up the assignment sources that best fit your workflow, you can manage them directly in Harness FME. + +* **Edit**: You can update the table reference, query, or mappings as your data model evolves. Changes to an existing Assignment Source may disrupt any experiments that are actively using it. +* **Delete**: Remove outdated or misconfigured sources to reduce clutter and prevent accidental use. + +## Troubleshooting + +If you encounter issues when configuring an Assignment Source: + +
+Test Connection or Run Query Fails + +1. Ensure your table or SQL query is valid and accessible with the credentials tied to your warehouse connection. + +1. Check that you have permission to query all referenced schemas/tables. + +1. Verify that the schema and table names are spelled correctly. +
+ +
+No Data Appears in Preview + +If you are using a SQL query, try running it directly in your warehouse to confirm output. + +
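For example, a quick sanity check run directly in your warehouse might look like this (hypothetical table name):

```sql
-- Confirm the source returns rows at all, then inspect a small sample.
SELECT COUNT(*) FROM analytics.raw_impressions;
SELECT * FROM analytics.raw_impressions LIMIT 10;
```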
+ +
+Column Not Detected or Missing + +Verify that your source table/query outputs the required columns: unique identifier, timestamp, and treatment. +
+ +
+Incorrect Environment or Traffic Type Mapping + +1. Double-check that each warehouse value (e.g., `UK-Prod`) is mapped to the correct Harness environment (e.g., `Production`). +1. If everything should map to one environment or type, consider using the hardcoded value option instead of column mapping. +
+ +
+Assignment Source Not Showing in Experiment Setup + +1. Make sure you clicked **Save** after configuration. +1. Confirm that the source hasn’t been deleted, disabled, or restricted to owners only. +
\ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/setup/configure-metrics.md b/docs/feature-management-experimentation/warehouse-native/setup/configure-metrics.md new file mode 100644 index 00000000000..141ff3bf89d --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/setup/configure-metrics.md @@ -0,0 +1,182 @@ +--- +title: Configuring Metric Sources for Warehouse Native Experimentation +description: Learn how to configure your metric source tables in your data warehouse for Warehouse Native Experimentation. +sidebar_label: Configure Metric Sources +sidebar_position: 4 +--- + + + +## Overview + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +When creating or editing a Metric Source, navigate to your project's settings: **Admin Settings** > **Project Settings** > **View Project** (for non-migrated orgs) or **FME Settings** > **Project Settings** > **View Project** (for migrated orgs). From there, you can define the metric source table using either a **Table name** or a **SQL query**. + + + + +### Select a table + +:::info +Recommended if your event data is already modeled into a clean event table. +::: + +1. Select an existing table name directly from the schema. +1. Click **Test connection** to validate that Harness can query the table successfully before continuing. + + + + + +### Use a custom SQL query + +:::info +Recommended for light data transformations (e.g., extracting values from JSON), joins across multiple tables, or filtering by event type. +::: + +You must have permissions to access all tables referenced in your query, based on the role and credentials configured when setting up your warehouse connection. + +1. Write a SQL query that outputs the required fields. +1. After entering your query, click **Run query** to validate and preview results before proceeding. + + + + +After setting up Metric Sources, you can create metric definitions to aggregate event data by type (i.e., count, sum, or average). With Metric Sources configured, your metrics remain consistent, standardized, and reusable across experiments and analyses. + +Harness FME will show a data preview so you can confirm the expected fields are returned. + +### Configure your environments + + + + +Select an environment column and map its values to Harness FME environments. + +For example, select the `ENV_NAME` column and map its values (`US-Prod`, `UK-Prod`) to your Harness project’s `Production` environment and map the `Stg` values (`US-Stg`, `UK-Stg`) to your Harness project’s `Staging` environment. + +This is recommended if a single metric source spans multiple environments. + + + + +Instead of selecting a column, set a fixed Harness FME environment for the entire Metric Source (e.g., always `Production`). + +This is recommended if the source is scoped to one environment. + + + + +### Configure your traffic types + +Similar to environments, traffic types can be set up in two ways: + + + + +Select a traffic type column (e.g., `ttid`) and map its values to Harness FME traffic types (e.g., `user`, `account`, or `anonymous`). + +This is recommended if the same Metric Source covers multiple population types. + + + + +Instead of selecting a column, set a fixed Harness FME traffic type for the entire Metric Source (e.g., always `account`). + +This is recommended if the data set only represents one population type. + + + + +### Configure events + +Metric Sources allow flexibility in how event types are set up. 
+ + + + +Select an event type column (e.g., `EVENT_NAME`) so the metric source can be reused across multiple metric definitions. + +This is recommended for general-purpose event sources. + + + + +Instead of selecting a column, set a fixed event name for the entire metric source (e.g., always `product_page_view`). + +This is recommended if the source is meant to be tightly scoped to a single event. + + + + +### Additional configuration options + +* **Preview data**: Harness shows a preview of the data returned from your table so you can validate that the expected rows and columns are present. +* **Owners**: Assign one or more owners to make clear who is responsible for maintaining the Metric Source. +* **Tags**: Add tags (e.g., by team, environment, or use case) to make sources easier to discover and organize. + +## Manage metric sources + +To maintain general-purpose reusable sources while also creating custom sources for sensitive or high-volume use cases, you can adopt a hybrid approach: + +* Reusable, standardized sources are recommended if you want one source to power many metric definitions (e.g., a general events table with filtering by event type). +* Custom sources are useful if you want to tightly scope data for privacy, relevancy, or performance. + +Once you've set up the metric sources that best fit your workflow, you can manage them directly in Harness FME. + +- **Edit**: Update the query, mappings, or field configuration to align with schema changes in your warehouse. Changes may disrupt metrics or experiments relying on this source. +- **Delete**: Remove unused or invalid sources to prevent accidental use. Before deletion, confirm no metric definitions depend on the source. + +## Troubleshooting + +If you encounter issues when configuring a Metric Source: + +
+Test Connection or Run Query Fails + +1. Ensure the table or query is valid and accessible with your warehouse connection credentials. +1. Verify that schemas and table names are spelled correctly. +
+ +
+No Data Appears in Preview + +1. Confirm the query/table returns rows for the event(s) you expect. +1. If you are using event filtering in SQL, test the query directly in your warehouse. +
+ +
+Missing Columns + +Verify that the required fields exist and are returned by your query. +
+ +
+Timestamp Format Issues + +Ensure event timestamps are in a supported `TIMESTAMP` or `DATETIME` format. + +If you are using epoch values (e.g., `EVENT_TIMESTAMP_MS`), convert them in your SQL query. +
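For example, a minimal conversion in Snowflake might look like this (the `event_timestamp_ms` column and table name are hypothetical; other warehouses use different conversion functions):

```sql
-- Normalize an epoch-milliseconds column to a timestamp.
-- In Snowflake, a scale argument of 3 tells TO_TIMESTAMP_NTZ the value is in milliseconds.
SELECT
  user_id,
  event_name,
  TO_TIMESTAMP_NTZ(event_timestamp_ms, 3) AS event_timestamp
FROM analytics.raw_events;
```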
+ +
+Incorrect Environment/Traffic Type Mapping + +1. Check that each warehouse value is mapped to the intended Harness environment or traffic type. +1. Use hardcoded values if everything should map to a single option. +
+ +
+Unable to Delete Source + +Check which metric definitions are currently using it. Delete or reassign those metrics before removing the source. +
\ No newline at end of file diff --git a/docs/feature-management-experimentation/warehouse-native/setup/experiments.md b/docs/feature-management-experimentation/warehouse-native/setup/experiments.md new file mode 100644 index 00000000000..172048e4e5b --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/setup/experiments.md @@ -0,0 +1,59 @@ +--- +title: Create an Experiment +sidebar_label: Create an Experiment +description: Learn how to create a cloud experiment in Harness FME. +sidebar_position: 6 +--- + + + +## Overview + +### Interactive guide + +This interactive guide will walk you through setting up an experiment for the first time. + + + +### Step-by-step guide + +Setting up an experiment follows these steps: + +1. Navigate to the Experiments section on your navigation panel and click **+ Create experiment**. + +1. Give your experiment a name and select a feature flag and environment in the **Assignment Source** section: + + * Choose a feature flag that has targeting active (not killed). + * Choose an environment for which the feature flag definition is initiated (valid environments are enabled in the dropdown). + +1. Optionally, click **Show advanced** and define an entry event filter in the **Filter by qualifying event** section. + + ![](../static/experiment-entry-filter.png) + + :::info + The entry event filter can only be defined during experiment creation. To make changes, create a new experiment. + ::: + + * Select a filter (e.g. `Has done the following event prior to the metric event`) and a qualifying event from the dropdown menus. + * Only users who trigger this event are counted as exposures. + * The filter applies globally to all metrics; if a [metric already has its own filter](/docs/feature-management-experimentation/experimentation/metrics/setup/filtering/#applying-a-filter), both must be satisfied. + +1. Define the scope of your experiment by setting a start and end time, a baseline treatment, comparison treatments, and a targeting rule. + + * Choose a start date on or after the date the feature flag was created. + * The targeting rule can be any rule with percentage distribution (other rules are disabled in the dropdown). The `default rule` listed in the Targeting rule dropdown is the last rule in the Targeting rules section of a feature flag definition. + + :::note + Based on your feature flag definition, the following fields are pre-populated by default: the start time is the timestamp of the flag’s current version, the end time is determined by your default review period, the baseline treatment is the flag’s default treatment, and the comparison treatments are all other treatments defined by the flag. + ::: + +1. Write an optional hypothesis, add any additional owners, and apply tags to help categorize your experiment (for example, by team, status, or feature area). Then click **Create**. + +1. Add key and supporting metrics to your experiment. Guardrail metrics will be measured automatically for every experiment. diff --git a/docs/feature-management-experimentation/warehouse-native/setup/index.md b/docs/feature-management-experimentation/warehouse-native/setup/index.md new file mode 100644 index 00000000000..e2a2f2a5d94 --- /dev/null +++ b/docs/feature-management-experimentation/warehouse-native/setup/index.md @@ -0,0 +1,42 @@ +--- +title: Setup +description: Learn how to set up assignment and metric sources to run FME experiments in your data warehouse using Warehouse Native. 
diff --git a/docs/feature-management-experimentation/warehouse-native/setup/index.md b/docs/feature-management-experimentation/warehouse-native/setup/index.md
new file mode 100644
index 00000000000..e2a2f2a5d94
--- /dev/null
+++ b/docs/feature-management-experimentation/warehouse-native/setup/index.md
@@ -0,0 +1,42 @@
+---
+title: Setup
+description: Learn how to set up assignment and metric sources to run FME experiments in your data warehouse using Warehouse Native.
+sidebar_label: Setup
+sidebar_position: 3
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+
+## Overview
+
+To start using Warehouse Native Experimentation:
+
+1. [Connect your data warehouse](/docs/feature-management-experimentation/warehouse-native/integrations) to Harness FME.
+1. Prepare your [assignment](/docs/feature-management-experimentation/warehouse-native/setup/assignment-sources) and [metric source tables](/docs/feature-management-experimentation/warehouse-native/setup/metric-sources) in your data warehouse. (A sketch of both table shapes follows this list.)
+
+   * **Assignment source tables** store raw assignment or exposure events in your data warehouse. They capture how users, accounts, or sessions were allocated to specific experiment variants, along with metadata such as experiment ID, treatment, timestamp, and environment.
+
+     Harness FME reads from these tables to determine which users were exposed to which treatments, ensuring accurate linkage between assignment data and downstream metric analysis.
+
+   * **Metric source tables** contain raw event-level data used to compute experiment metrics. Each row typically represents a user, session, or account interaction, such as a page view, purchase, or API call, along with associated properties (for example, value, timestamp, or event context).
+
+     Harness FME queries these tables to retrieve and aggregate events for metric definitions, ensuring that experiment analyses are based on consistent, verifiable data directly from your warehouse.
+
+1. Configure your [assignment](/docs/feature-management-experimentation/warehouse-native/setup/configure-assignments) and [metric sources](/docs/feature-management-experimentation/warehouse-native/setup/configure-metrics) in Harness FME.
+
+   * An **assignment source** defines how Harness FME should read impression/exposure events from your data warehouse and map them to experiments. It ensures that users are correctly assigned to treatments, environments, and traffic types, enabling accurate metric analysis across experiments.
+   * A **metric source** defines how Harness FME reads and interprets raw event data from your warehouse. It ensures that metric events are correctly captured, timestamped, scoped to environments and traffic types, and made available for metric definitions.
+
+1. Define your [metrics](/docs/feature-management-experimentation/warehouse-native/setup/metrics/) and [create experiments](/docs/feature-management-experimentation/warehouse-native/setup/experiments) in Harness FME.
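+
+To make those table shapes concrete, here is a minimal sketch of prepared source tables. The names and types are illustrative assumptions, not a required schema; see the preparation guides linked above for the actual column requirements:
+
+```sql
+-- Hypothetical assignment source table: one row per exposure event.
+CREATE TABLE fme_assignments (
+  user_id              STRING,    -- unit of randomization
+  flag_name            STRING,    -- feature flag or experiment identifier
+  treatment            STRING,    -- variant the unit was assigned
+  assignment_timestamp TIMESTAMP, -- when the exposure occurred
+  environment_id       STRING     -- for example, production vs. staging
+);
+
+-- Hypothetical metric source table: one row per tracked event.
+CREATE TABLE fme_metric_events (
+  user_id         STRING,
+  event_name      STRING,    -- for example, 'purchase' or 'page_view'
+  event_timestamp TIMESTAMP,
+  event_value     FLOAT      -- numeric value for sum and average metrics
+);
+```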
+
+Once you've created metric definitions and started running experiments in your data warehouse, you can access [analyses in Harness FME](/docs/feature-management-experimentation/warehouse-native/experiment-results/).
\ No newline at end of file
diff --git a/docs/feature-management-experimentation/warehouse-native/setup/metric-sources.md b/docs/feature-management-experimentation/warehouse-native/setup/metric-sources.md
new file mode 100644
index 00000000000..44dc7fcf669
--- /dev/null
+++ b/docs/feature-management-experimentation/warehouse-native/setup/metric-sources.md
@@ -0,0 +1,90 @@
+---
+title: Preparing Metric Source Tables for Warehouse Native Experimentation
+description: Learn how to prepare your metric source tables in your data warehouse for Warehouse Native Experimentation.
+sidebar_label: Prepare Metric Source Tables
+sidebar_position: 2
+---
+
+
+
+## Overview
+
+To prepare Metric Sources for Warehouse Native Experimentation, transform your raw event logs into a clean, standardized table that serves as the foundation for calculating metrics.
+
+This page describes the required fields, recommended fields, and best practices for preparing your metric source tables.
+
+## Required columns
+
+Every Metric Source table must include the following columns:
+
+| **Column** | **Type** | **Description** |
+| --- | --- | --- |
+| **Unique Key** | `STRING` | Unique identifier for the unit of randomization (e.g., `user_id`, `account_id`, or custom key). Must align with the key in the Assignment Source. |
+| **Event Timestamp** | `DATETIME` / `TIMESTAMP` | The precise time when the event occurred. |
+| **Event Name** | `STRING` | The type of event (e.g., `purchase`, `page_view`, `add_to_cart`). |
+
+:::info
+These fields are mandatory. Without them, Warehouse Native cannot define and calculate metrics.
+:::
+
+## Recommended columns
+
+While not required, these fields make debugging, filtering, and governance more efficient.
+
+| **Column** | **Type** | **Description** |
+| --- | --- | --- |
+| **Event Value** | `FLOAT` / `INTEGER` | For metrics like revenue or page load time. Examples: order amount in USD, page load time in seconds. Required for aggregation in average and sum metrics. |
+| **Properties (flattened)** | `STRING`, `BOOLEAN`, `NUMERIC` | Useful for filtering metrics (e.g., `country`, `device_type`, `plan_tier`). |
+| **Environment ID** | `STRING` | Separate prod/staging data when the same event schema is used. When configuring a Metric Source in FME, you can map column values to a Harness environment or hard-code a single environment. Metrics automatically filter by the experiment’s environment. |
+| **Traffic Type** | `STRING` | Distinguishes the unit type (e.g., `user`, `account`, `anonymous`). Align with Assignment Sources when experiments randomize on different units. |
+
+## Common raw table schemas
+
+### Web/App Analytics Event Logs
+
+| **Example Raw Schema** | **Transformations** |
+| --- | --- |
+| `user_id`<br/>`event_name`<br/>`event_time`<br/>`properties (JSON)` | • Flatten properties into columns for key attributes used in metrics (e.g., `properties.amount` → `event_value`, `properties.tier` → `plan_type`).<br/>• Standardize `event_time` → `event_timestamp`.<br/>• Ensure event names are consistent (`purchase` vs. `checkout_completed`). |
+
+### E-commerce Transaction Logs
+
+| **Example Raw Schema** | **Transformations** |
+| --- | --- |
+| `user_id`<br/>`order_id`<br/>`order_time`<br/>`order_amount`<br/>`order_status` | • Map `order_time` → `event_timestamp`.<br/>• Set `event_name = 'purchase'`.<br/>• Use `order_amount` as `event_value`.<br/>• Filter only completed/valid orders (`order_status = 'completed'`). |
+
+### Custom Business Event Tables
+
+| **Example Raw Schema** | **Transformations** |
+| --- | --- |
+| `account_id`<br/>`metric_type`<br/>`metric_value`<br/>`created_at` | • Map `account_id` → `key`.<br/>• Map `metric_type` → `event_name`.<br/>• Standardize `metric_value` → `event_value`. |
+
+## Prepare your metric table
+
+- **Consistency with Assignment Source**: Use the same key (`user_id`, `account_id`) in both tables. This is critical for joining exposures to outcomes.
+- **De-duplication**: Remove duplicate event logs; duplicates are common in streaming pipelines. Define uniqueness as (`user_id`, `event_name`, `event_timestamp`, `event_value`) when possible.
+- **Timestamps in UTC**: Always store `event_timestamp` in UTC.
+- **Flatten Properties Early**: JSON blobs are flexible but slow in downstream queries. Extract only the fields needed for metrics (e.g., `amount`, `plan_tier`, `country`).
+- **Event Naming Conventions**: Standardize event names across products and teams. Avoid mixing singular and plural (for example, `purchase` vs. `purchases`).
+- **Partitioning and Indexing**: Partition large tables by `DATE(event_timestamp)`. Cluster or index by `user_id` or `event_name` for efficient joins with Assignment Sources.
+- **Value Handling**: Handle nulls carefully, either excluding them or treating them as `0`, depending on the metric's intent. Ensure numeric fields (for example, `event_value`) use consistent units (for example, always USD, not mixed currencies).
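+
+The e-commerce example above maps to a simple transformation query. This is a minimal sketch under assumed names (`raw_orders` as the source table and `fme_metric_events` as the prepared table); adapt it to your warehouse's SQL dialect:
+
+```sql
+-- Sketch: standardize raw e-commerce orders into a prepared metric source.
+CREATE OR REPLACE TABLE fme_metric_events AS
+SELECT DISTINCT                     -- de-duplicate repeated event logs
+  user_id,
+  'purchase'   AS event_name,      -- standardized event name
+  order_time   AS event_timestamp, -- stored in UTC
+  order_amount AS event_value      -- consistent units (for example, USD)
+FROM raw_orders
+WHERE order_status = 'completed';  -- keep only valid orders
+```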
+
+## Example prepared table schema
+
+| **Column** | **Type** | **Notes** |
+| --- | --- | --- |
+| `user_id` | `STRING` | Required |
+| `event_timestamp` | `TIMESTAMP` | Required |
+| `event_name` | `STRING` | Required |
+| `event_value` | `FLOAT` | Required for average and sum metric types |
+| `any_custom_field` | `STRING` / `BOOLEAN` / `NUMERIC` | Optional (flattened from properties) |
+| `environment_id` | `STRING` | Recommended |
+| `traffic_type` | `STRING` | Optional |
+
+Once your Metric Source tables are prepared and validated, see [Setting up a metric source](/docs/feature-management-experimentation/warehouse-native/setup/configure-metrics) to connect them in Harness FME.
\ No newline at end of file
diff --git a/docs/feature-management-experimentation/warehouse-native/setup/metrics.md b/docs/feature-management-experimentation/warehouse-native/setup/metrics.md
new file mode 100644
index 00000000000..0098a5d66d6
--- /dev/null
+++ b/docs/feature-management-experimentation/warehouse-native/setup/metrics.md
@@ -0,0 +1,104 @@
+---
+title: Create a Metric for Warehouse Native Experimentation in Harness FME
+sidebar_label: Create a Metric Definition
+description: Learn how to create a metric, select a metric source, and set calculation logic for Warehouse Native Experimentation.
+sidebar_position: 5
+---
+
+
+
+## Overview
+
+This page explains how to create, manage, and troubleshoot metric definitions for Warehouse Native Experimentation in Harness FME. A metric specifies how success is measured in an experiment by associating a metric source with business logic (for example, an event type, aggregation method, and impact direction) so your team can consistently measure outcomes.
+
+## Setup
+
+To create a metric in Harness FME, navigate to **Metrics** in the navigation menu and click **Create metric**.
+
+Then, configure the following components in the side panel:
+
+* **General Details**: Set the metric name, owners, description, and category.
+* **Metric Source and Events**: Choose where the data comes from and which events are included.
+* **Calculation Logic**: Define how the metric is measured and what type of change indicates success.
+
+Once configured, metric definitions form a trusted, reusable library of success measures.
+Analysts and experiment owners can apply them across experiments to ensure consistent, aligned reporting on what success means to your business.
+
+### General details
+
+1. Enter a clear, descriptive name such as `Count of Pricing Page Views` or `Average Page Load Time` in the **Name** field.
+1. Assign one or more stakeholders responsible for the metric in the **Owners** dropdown menu. Owners help maintain accountability, serve as points of contact for other teams, and are notified when a metric alert policy fires.
+1. Optionally, add tags to help organize metrics by team, product area, or business goal in the **Tags** field.
+1. Optionally, add an explanation of what the metric measures in the **Description** field. For example, `This metric measures the average cart value of users who complete a purchase`.
+1. Assign a category in the **Metric category** dropdown menu for easier organization. You can also mark a metric as a **Guardrail Metric**.
+
+   :::info
+   [Guardrail metrics](/docs/feature-management-experimentation/experimentation/metrics/categories/) are automatically included in all experiments and represent critical system or user health indicators that you always want to monitor, such as latency, error rates, churn, or customer satisfaction. These help ensure that while you're optimizing for your primary success metric, you aren't unintentionally degrading key performance or user experience.
+   :::
+
+### Metric source and events
+
+1. Select a **Metric source** that contains the event data to measure. This determines which dataset the calculation uses.
+1. Choose the level of aggregation (such as `user`, `account`, or `session`) in the **Select traffic type** dropdown menu to ensure the metric attributes events correctly.
+1. Select one or more events from the metric source that define the behavior being measured (for example, `purchase`, `page_view`, or `sign_up`) in the **Events** dropdown menu.
+
+### Calculation logic
+
+1. Specify whether the metric should increase (e.g., conversion rate, revenue) or decrease (e.g., latency, error rate) to indicate success in the **Select desired impact** dropdown menu.
+1. Choose how the metric is calculated in the **Measure as** dropdown menu. Harness FME supports multiple aggregation types:
+
+   | Measure As | Description | Example Use Cases |
+   | --- | --- | --- |
+   | **Count of events per user** | Counts how many times each user triggered the event. | `# of searches per user`, `# of purchases per user` |
+   | **Average of event values per user** | Averages a numeric property from events per user. | `Average purchase amount`, `Average session time` |
+   | **Sum of event values per user** | Sums a numeric property from events per user. | `Total spend per user`, `Total minutes streamed` |
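+
+   To make these aggregation types concrete, here is a minimal SQL sketch of the per-user rollup they describe (assuming the prepared `fme_metric_events` table from the metric source guide; this is an illustration, not FME's actual query):
+
+   ```sql
+   -- Sketch: per-user aggregations for a 'purchase' metric.
+   SELECT
+     user_id,
+     COUNT(*)         AS count_per_user, -- Count of events per user
+     AVG(event_value) AS avg_per_user,   -- Average of event values per user
+     SUM(event_value) AS sum_per_user    -- Sum of event values per user
+   FROM fme_metric_events
+   WHERE event_name = 'purchase'
+   GROUP BY user_id;
+   ```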
+
+## Manage metrics
+
+Once created, metrics become shared, reusable definitions that teams can reference across experiments. Managing metrics effectively ensures consistent measurement and a reliable source of truth for your organization.
+
+* **Edit**: Update a metric's definition, owners, tags, or calculation logic if business rules change.
+
+  :::warning Note
+  Editing a metric updates it everywhere it's used. Review dependencies carefully before making changes.
+  :::
+
+* **Delete**: Remove outdated or unused metrics to keep your library organized. Only delete metrics that are no longer relevant and are not used in active experiments.
+
+* **Discoverability**: Use tags, owners, and categories to make metrics easy to find, filter, and trust. Clear ownership and tagging help teams locate reliable definitions quickly.
+
+## Troubleshooting
+
+If a metric doesn't appear or behaves unexpectedly:
+
+<details>
+<summary>Metric Not Appearing in Experiment Setup</summary>
+
+Ensure you’ve clicked **Create** and that the metric includes a valid source and event mapping.
+
+</details>
+
+<details>
+<summary>Incorrect Aggregation</summary>
+
+Double-check your **Measure As** setting (for example, `Count of events per user`, `Average of event values per user`, or `Sum of event values per user`).
+
+</details>
+
+<details>
+<summary>Event Mismatch</summary>
+
+Verify that the selected event name exists in the chosen **Metric Source** and is mapped correctly. A quick warehouse-side check is sketched after these items.
+
+</details>
+
+<details>
+<summary>Impact Direction Confusion</summary>
+
+Confirm that **Increase** is used for positive outcomes (like conversion or revenue) and **Decrease** for negative ones (like latency or error rate).
+
+</details>
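+
+For event mismatches, a quick query can confirm which event names your metric source actually contains. A minimal sketch, assuming a prepared metric source table named `fme_metric_events`:
+
+```sql
+-- List event names in the metric source, most frequent first,
+-- to spot naming mismatches (e.g., 'purchase' vs. 'purchases').
+SELECT event_name, COUNT(*) AS event_count
+FROM fme_metric_events
+GROUP BY event_name
+ORDER BY event_count DESC;
+```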
diff --git a/docs/feature-management-experimentation/warehouse-native/static/data-flow.png b/docs/feature-management-experimentation/warehouse-native/static/data-flow.png new file mode 100644 index 00000000000..68bf594dfe4 Binary files /dev/null and b/docs/feature-management-experimentation/warehouse-native/static/data-flow.png differ diff --git a/docs/feature-management-experimentation/warehouse-native/static/experiment-entry-filter.png b/docs/feature-management-experimentation/warehouse-native/static/experiment-entry-filter.png new file mode 100644 index 00000000000..b40c8cabbff Binary files /dev/null and b/docs/feature-management-experimentation/warehouse-native/static/experiment-entry-filter.png differ diff --git a/docs/feature-management-experimentation/warehouse-native/static/line-chart.png b/docs/feature-management-experimentation/warehouse-native/static/line-chart.png new file mode 100644 index 00000000000..58a981a0311 Binary files /dev/null and b/docs/feature-management-experimentation/warehouse-native/static/line-chart.png differ diff --git a/docs/feature-management-experimentation/warehouse-native/static/share-results.png b/docs/feature-management-experimentation/warehouse-native/static/share-results.png new file mode 100644 index 00000000000..be89b8b6854 Binary files /dev/null and b/docs/feature-management-experimentation/warehouse-native/static/share-results.png differ diff --git a/docs/feature-management-experimentation/warehouse-native/static/summarize.png b/docs/feature-management-experimentation/warehouse-native/static/summarize.png new file mode 100644 index 00000000000..116ddfa259f Binary files /dev/null and b/docs/feature-management-experimentation/warehouse-native/static/summarize.png differ diff --git a/docs/feature-management-experimentation/warehouse-native/static/view-metrics.png b/docs/feature-management-experimentation/warehouse-native/static/view-metrics.png new file mode 100644 index 00000000000..cc2851e262e Binary files /dev/null and b/docs/feature-management-experimentation/warehouse-native/static/view-metrics.png differ diff --git a/docs/feature-management-experimentation/warehouse-native/static/view-results.png b/docs/feature-management-experimentation/warehouse-native/static/view-results.png new file mode 100644 index 00000000000..32d2521a1ad Binary files /dev/null and b/docs/feature-management-experimentation/warehouse-native/static/view-results.png differ diff --git a/release-notes/feature-management-experimentation.md b/release-notes/feature-management-experimentation.md index 4fbdc4bb199..03491c903d5 100644 --- a/release-notes/feature-management-experimentation.md +++ b/release-notes/feature-management-experimentation.md @@ -1,7 +1,7 @@ --- title: Feature Management & Experimentation release notes sidebar_label: Feature Management & Experimentation -date: 2025-10-15T10:00:00 +date: 2025-10-22T10:00:00 tags: ["fme", "feature management experimentation"] sidebar_position: 11 --- @@ -12,10 +12,29 @@ import HarnessApiData from '../src/components/HarnessApiData/index.tsx'; These release notes describe recent changes to Harness Feature Management & Experimentation (FME). -#### Last updated: October 15, 2025 +#### Last updated: October 22, 2025 ## October 2025 +### [New Feature] Warehouse Native Experimentation in Beta +---- +#### 2025-10-22 + +Harness FME now supports **Warehouse Native Experimentation** in beta. Warehouse Native allows you to run experiments directly in your data warehouse using your own assignment and event data. 
This approach gives you greater flexibility, transparency, and control over experiment analysis, without needing to export or duplicate data outside your analytics environment. + +You can use Warehouse Native Experimentation to: + +- Run analyses on experiments with data already stored in your warehouse. +- Leverage FME's statistical engine and additional measurement techniques for improved accuracy and confidence intervals. +- Integrate with existing assignment and metric tables in your data warehouse. + +To request access for the Warehouse Native Experimentation beta experience, contact [Harness Support](/docs/feature-management-experimentation/fme-support). + +#### Related documentation + +- [Warehouse Native Experimentation](/docs/feature-management-experimentation/warehouse-native) +- [Warehouse Native Setup](/docs/feature-management-experimentation/warehouse-native/setup) + ### [New Feature] Harness Proxy ---- #### 2025-10-15 diff --git a/sidebars.ts b/sidebars.ts index 4aef83be8fa..29806bddfb2 100644 --- a/sidebars.ts +++ b/sidebars.ts @@ -1899,7 +1899,7 @@ const sidebars: SidebarsConfig = { }, { type: 'category', - label: 'Experimentation', + label: 'Cloud Experimentation', link: { type: 'generated-index', slug: 'feature-management-experimentation/experimentation', @@ -1912,6 +1912,19 @@ const sidebars: SidebarsConfig = { }, ], }, + { + type: 'category', + label: 'Warehouse Native Experimentation', + className: 'sidebar-item-beta', + link: { + type: 'doc', + id: 'feature-management-experimentation/warehouse-native/index', + }, + collapsed: true, + items: [ + { type: 'autogenerated', dirName: 'feature-management-experimentation/warehouse-native' }, + ], + }, { type: 'category', label: 'Release Agent', diff --git a/src/components/Docs/data/featureManagementExperimentationData.ts b/src/components/Docs/data/featureManagementExperimentationData.ts index 91e4008a559..c13b88a1436 100644 --- a/src/components/Docs/data/featureManagementExperimentationData.ts +++ b/src/components/Docs/data/featureManagementExperimentationData.ts @@ -62,12 +62,19 @@ import { MODULES } from "@site/src/constants" link: "/docs/feature-management-experimentation/release-monitoring", }, { - title: "Experimentation", + title: "Cloud Experimentation", module: MODULES.fme, description: "Run experiments and analyze results for data-driven development.", link: "/docs/feature-management-experimentation/experimentation", }, + { + title: "Warehouse Native Experimentation", + module: MODULES.fme, + description: + "Run experiments and analyze results for data-driven development in your data warehouse.", + link: "/docs/feature-management-experimentation/warehouse-native", + }, { title: "Release Agent", module: MODULES.fme, diff --git a/src/components/Docs/data/whnIntegrations.js b/src/components/Docs/data/whnIntegrations.js new file mode 100644 index 00000000000..321f9fb9a99 --- /dev/null +++ b/src/components/Docs/data/whnIntegrations.js @@ -0,0 +1,64 @@ +import React from 'react'; + +export const dataWarehouses = [ + { + name: 'Snowflake', + img: '/provider-logos/whn-integrations/snowflake-logo.png', + link: '/docs/feature-management-experimentation/warehouse-native/integrations/snowflake', + }, + { + name: 'Amazon Redshift', + img: '/provider-logos/whn-integrations/redshift-logo.png', + link: '/docs/feature-management-experimentation/warehouse-native/integrations/amazon-redshift', + }, +]; + +// Helper to chunk items into rows of 4 +function chunkArray(array, size) { + return array.reduce((acc, _, i) => + (i % size ? 
      acc : [...acc, array.slice(i, i + size)]), []);
+}
+
+// Component to render the data warehouse logo grid
+export function Section({ title, items }) {
+  const rows = chunkArray(items, 4);
+  return (
+    <>
+      <h3>{title}</h3>
+      {rows.map((row, idx) => (
+        <div className="row" key={idx}>
+          {row.map(({ name, img, link }) => (
+            <a className="col" href={link} key={name}>
+              <img src={img} alt={name} />
+              <p>{name}</p>
+            </a>
+          ))}
+        </div>
+      ))}
+    </>
+  );
+}
+
+export default function dataWarehousesGrid() {
+  return (
+    <Section title="Data Warehouses" items={dataWarehouses} />
+ ); +} diff --git a/src/components/ToolTip/tooltips.json b/src/components/ToolTip/tooltips.json index 68bdbb7ac08..72664c87e4d 100644 --- a/src/components/ToolTip/tooltips.json +++ b/src/components/ToolTip/tooltips.json @@ -11,5 +11,16 @@ "tf-commands": { "init": "OpenTofu/Terraform command used to initialize a configuration. It downloads and configures providers, modules, and other dependencies." } + }, + "fme": { + "warehouse-native": { + "metric": "A quantifiable measure used to track and assess the performance of a specific aspect of an experiment. In Warehouse Native, metrics are defined in Harness FME.", + "experiment": "A controlled test to evaluate the impact of different variations on user behavior or system performance. In Warehouse Native, experiments are defined in Harness FME.", + "assignment-source": "A data source that defines how users are assigned to different variations in an experiment. In Warehouse Native, this data is typically stored in your data warehouse.", + "metric-source": "A data source that defines how metrics are collected and calculated for an experiment. In Warehouse Native, this data is typically stored in your data warehouse.", + "data-warehouse": "A centralized repository for storing and managing large volumes of structured and semi-structured data. Examples include Snowflake, BigQuery, Redshift, and Databricks.", + "warehouse-native": "A method of running feature management experiments directly within your data warehouse, leveraging its processing power and existing data infrastructure.", + "cloud-experimentation": "A method of running feature management experiments using a cloud-based service provided by Harness FME, which handles data collection, analysis, and reporting." + } } } \ No newline at end of file diff --git a/src/css/custom.css b/src/css/custom.css index d8075e8deef..52c2fb2f091 100644 --- a/src/css/custom.css +++ b/src/css/custom.css @@ -1889,3 +1889,42 @@ html[data-theme='dark'] .sidebar-opensource > a::before { background: linear-gradient(135deg, #3dc7f6, #00ade4); box-shadow: 0 2px 4px rgba(61, 199, 246, 0.4); } + +.sidebar-item-beta > .menu__list-item-collapsible > .menu__link { + position: relative; + display: flex !important; + align-items: center !important; + justify-content: space-between !important; +} + +.sidebar-item-beta > .menu__list-item-collapsible > .menu__link::after { + content: 'BETA'; + background: linear-gradient(135deg, #5a00cc, #ff00cc); + color: white; + font-size: 8px; + font-weight: 700; + padding: 2px 6px; + border-radius: 10px; + text-transform: uppercase; + letter-spacing: 0.5px; + margin-left: auto; + box-shadow: 0 2px 6px rgba(161, 0, 255, 0.4); + animation: betaBadgePulse 3s ease-in-out infinite; + flex-shrink: 0; +} + +@keyframes betaBadgePulse { + 0%, 100% { + opacity: 1; + transform: scale(1); + } + 50% { + opacity: 0.85; + transform: scale(0.97); + } +} + +[data-theme='dark'] .sidebar-item-beta > .menu__list-item-collapsible > .menu__link::after { + background: linear-gradient(135deg, #c458ff, #ff66e8); + box-shadow: 0 2px 6px rgba(255, 102, 232, 0.5); +} \ No newline at end of file diff --git a/static/provider-logos/whn-integrations/bigquery-logo.svg b/static/provider-logos/whn-integrations/bigquery-logo.svg new file mode 100644 index 00000000000..4ee5458647f --- /dev/null +++ b/static/provider-logos/whn-integrations/bigquery-logo.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/static/provider-logos/whn-integrations/redshift-logo.png 
b/static/provider-logos/whn-integrations/redshift-logo.png new file mode 100644 index 00000000000..c3b8476b32f Binary files /dev/null and b/static/provider-logos/whn-integrations/redshift-logo.png differ diff --git a/static/provider-logos/whn-integrations/snowflake-logo.png b/static/provider-logos/whn-integrations/snowflake-logo.png new file mode 100644 index 00000000000..81bf2d23880 Binary files /dev/null and b/static/provider-logos/whn-integrations/snowflake-logo.png differ diff --git a/static/provider-logos/whn-integrations/trino-logo.png b/static/provider-logos/whn-integrations/trino-logo.png new file mode 100644 index 00000000000..f1612909c1b Binary files /dev/null and b/static/provider-logos/whn-integrations/trino-logo.png differ