Add Flow Action: Remove Duplicate Objects by Key #240
Merged
This contribution adds a reusable Flow Action that removes duplicate objects from a JSON array based on a specified key. It is designed to simplify data cleanup and normalization in Flows and Subflows.
How It Works
The Flow Action accepts a JSON array of objects and removes duplicates according to a field specified by the user.
You can choose whether to keep the first or last occurrence, and optionally make the comparison case-insensitive and trim whitespace before evaluating duplicates.
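To make the comparison rules concrete, here is a minimal TypeScript sketch of the deduplication logic. It is an illustration only, not the Action's actual script; the function name and option names are assumptions.

```typescript
type DedupeOptions = {
  keep?: "first" | "last";
  caseInsensitive?: boolean;
  trim?: boolean;
};

// Illustrative sketch of the behavior described above: keep the first or last
// occurrence per key value, optionally trimming and lower-casing before comparing.
function removeDuplicatesByKey(
  items: Record<string, unknown>[],
  key: string,
  { keep = "first", caseInsensitive = false, trim = false }: DedupeOptions = {}
): Record<string, unknown>[] {
  // Map from normalized key value to the object chosen to represent it.
  const seen = new Map<string, Record<string, unknown>>();
  for (const item of items) {
    let value = String(item[key] ?? "");
    if (trim) value = value.trim();
    if (caseInsensitive) value = value.toLowerCase();
    // keep === "first": ignore later occurrences; keep === "last": overwrite.
    if (!seen.has(value) || keep === "last") {
      seen.set(value, item);
    }
  }
  return Array.from(seen.values());
}
```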
Inputs
array: JSON string representing an array of objects to process (see the example input after this list).
key: Object field name used to detect duplicates. Example: email.
keep: Determines whether to keep the first or last occurrence of duplicated entries.
case_insensitive: When true, field values are compared case-insensitively (for example, "Ann@Example.com" matches "ann@example.com").
trim: When true, leading and trailing whitespace is trimmed before comparing values (for example, " ann@example.com " matches "ann@example.com").
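For illustration, a set of input values might look like the following sketch (all values are hypothetical, not taken from the Action's tests):

```typescript
// Hypothetical input values for the Action (illustrative only).
const array = JSON.stringify([
  { name: "Ann", email: "ann@example.com" },
  { name: "Annie", email: "Ann@Example.com " }, // duplicate once trimmed and lower-cased
  { name: "Bob", email: "bob@example.com" },
]);
const key = "email";
const keep = "first";          // keep the first occurrence of each email
const case_insensitive = true; // "Ann@Example.com" matches "ann@example.com"
const trim = true;             // the trailing space is ignored
```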
Outputs
JSON object containing the cleaned list and metadata about the operation; a hypothetical shape is sketched below.
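A minimal sketch of what the output might look like, assuming the metadata consists of simple counts (the field names are assumptions, not the Action's documented schema):

```typescript
// Hypothetical output structure; the real field names may differ.
const output = {
  result: [
    { name: "Ann", email: "ann@example.com" },
    { name: "Bob", email: "bob@example.com" },
  ],
  original_count: 3, // objects received
  removed_count: 1,  // duplicates dropped
};
```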
Example Use Case
Use this Action to remove duplicate user records by email before performing bulk insert or update operations via Flow Designer.
This helps ensure data integrity, especially when processing large datasets or integrating data from external APIs.
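As a rough sketch of that use case, reusing the deduplication function from the earlier example (the API response and bulk-insert helper are hypothetical placeholders, not part of this Action):

```typescript
// Hedged sketch: deduplicate incoming user records by email before a bulk insert.
declare const externalApiResponse: string; // hypothetical raw JSON from an external API
declare function bulkInsertUsers(users: Record<string, unknown>[]): void; // hypothetical helper

const incoming = JSON.parse(externalApiResponse) as Record<string, unknown>[];
const uniqueUsers = removeDuplicatesByKey(incoming, "email", {
  keep: "first",
  caseInsensitive: true,
  trim: true,
});
bulkInsertUsers(uniqueUsers); // only one record per email reaches the bulk insert
```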