This guide shows how to use *NextGen UI Agent* in your AI application and how it fits together with the application's other building blocks.

## What is *UI Agent*
In short, *UI Agent* takes a `User Prompt` and [`Structured Data`](input_data/index.md) relevant to this prompt as input,
and generates a UI component to visualize that piece of data for the user. We call it a [`Data UI Block`](data_ui_blocks/index.md).
UI Agent uses an [AI (LLM)](llm.md) in this step to understand the `User Prompt` and the [input data structure](./input_data/structure.md),
and to select the best dynamic UI component and the data values to display.
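This input/output contract can be sketched in plain Python (a toy illustration only: the `select_ui_block` function, its trivial heuristic, and the output field names are assumptions of this sketch, not the agent's real API):

```python
# Minimal sketch of the UI Agent contract: a User Prompt plus Structured Data
# go in, a Data UI Block description comes out. All names here are
# illustrative assumptions, not the real agent API.

def select_ui_block(user_prompt: str, input_data: dict) -> dict:
    """Toy stand-in for the LLM-backed selection step."""
    # A real agent asks an LLM to pick the component and fields;
    # here a trivial heuristic stands in for that call.
    items = input_data.get("items", [])
    component = "table" if len(items) > 1 else "one-card"
    return {
        "component": component,
        "title": user_prompt,
        "fields": sorted(items[0].keys()) if items else [],
    }

block = select_ui_block(
    "Show me my Toy Story movies",
    {"items": [{"title": "Toy Story", "year": 1995},
               {"title": "Toy Story 2", "year": 1999}]},
)
assert block["component"] == "table"  # multiple records -> a table fits
```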
As UI component generation is a narrow AI task, small LLMs (3B, 8B, Mini/Flash/Flash-Lite series) are typically able to provide
good results for it, which reduces LLM cost. They also offer shorter processing times, which is important for a good user experience.
We provide an [LLM evaluation tool](https://github.com/RedHat-UX/next-gen-ui-agent/tree/main/tests/ai_eval_components) as part of this project;
results from some [eval runs are available](https://github.com/RedHat-UX/next-gen-ui-agent/tree/main/tests/ai_eval_components/results).
We expect to provide small LLMs fine-tuned for the UI generation task in the future.
*UI Agent* can process structured Input Data in different formats. An extensible [input data transformation framework](./input_data/transformation.md)
is used, with out-of-the-box transformers provided for the most common formats like `JSON`, `YAML`, `CSV` and `fixed width columns table`.
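For illustration, this is the kind of normalization such a transformer performs (a stdlib-only sketch; the function names and the list-of-records target shape are assumptions of this sketch, not the framework's actual interface):

```python
# Sketch of what an input data transformer does: normalize different wire
# formats into one common list-of-records structure. This is an illustration,
# not the project's transformer framework API.
import csv
import io
import json

def transform_json(payload: str) -> list:
    # Accept either a top-level object or a top-level array.
    data = json.loads(payload)
    return data if isinstance(data, list) else [data]

def transform_csv(payload: str) -> list:
    # Header row becomes the record keys.
    return list(csv.DictReader(io.StringIO(payload)))

records_json = transform_json('[{"name": "Ada", "role": "admin"}]')
records_csv = transform_csv("name,role\nAda,admin\n")
assert records_json == records_csv  # same records, different input formats
```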
Stricter configuration can be defined when tighter control of the UI components applied to the input data is necessary, for
details see [Component Selection and Configuration docs](./data_ui_blocks/index.md#selection-and-configuration-process).
In the future, this agent will also maintain `UI state` and view layouts to keep the UI and flows consistent, handle personalized
value formatting, and many other features. Stay tuned ;-)
Example of the generated `Data UI Block`:

*UI Agent* also supports [*Hand Build Components*](data_ui_blocks/hand_build_components.md) for pieces of data where a UI component exists
already, or where special visualization or extra features on top of AI-generated UI components are needed.
## How to use *UI Agent*
Your AI application, called the *Controlling assistant*, has to provide the other building blocks and their orchestration to implement a complete solution.
Example of the *Controlling assistant* architecture:

It can do so directly, for example using `LLM Tools Calling`/`MCP`, or it can call a *Data providing agent* in the case
of a Multi-Agent architecture. It can even generate that data itself in the process of reasoning or of detecting and processing the user's intent.
The *Controlling assistant* can load multiple pieces of data in one conversation turn and send them all to the *UI Agent* to generate
multiple `Data UI Blocks` to be shown to the user in the assistant's GUI.
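That per-turn flow can be sketched as follows (the `generate_ui_block` stand-in and its output shape are assumptions made for illustration, not the real agent call):

```python
# Sketch of one conversation turn: several pieces of backend data are
# collected and each one becomes a Data UI Block. `generate_ui_block`
# is a hypothetical stand-in for the real UI Agent invocation.

def generate_ui_block(user_prompt: str, data: dict) -> dict:
    # Stand-in: a real call would ask the (small) LLM to pick a component.
    return {"component": "one-card", "data_type": data["type"]}

def handle_turn(user_prompt: str, data_pieces: list) -> list:
    # One turn may yield several Data UI Blocks, one per piece of data.
    return [generate_ui_block(user_prompt, piece) for piece in data_pieces]

blocks = handle_turn(
    "What is the status of my order and my account?",
    [{"type": "order.status"}, {"type": "account.info"}],
)
assert [b["data_type"] for b in blocks] == ["order.status", "account.info"]
```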
The *Controlling assistant* can also generate a *Natural language response* based on this data and deliver it to the user through the GUI or a voice user interface.
To follow the vision of *NextGen UI*, this natural language response should not repeat the visualized data, but rather provide
Example mockup of the *Controlling assistant* GUI:
They can be rendered using pluggable GUI component system renderers, and integrated into the GUI of the *Controlling assistant*.
We provide renderers for several UI component systems, either Server-Side or Client-Side, see [Binding into UI](renderer/index.md).
The output of the *UI Agent* does not contain only the `Data UI Block` rendering, but also a [structured UI component configuration](../spec/output.md).
It can be used to implement advanced UI features, like live data updates from the backend, manual selection of visualized table columns, etc.
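As an illustration of why the structured configuration matters, the same configuration can be replayed against fresh data without another LLM call (the configuration shape and the `render` helper are assumptions of this sketch, not the documented output format):

```python
# Sketch of using the agent's structured output (not just the rendering):
# the component configuration is kept so the frontend can re-render the same
# block when fresh backend data arrives. Field names are illustrative.

def render(component_config: dict, data: dict) -> str:
    # Toy renderer: join the configured fields' values into one line.
    values = [str(data[f]) for f in component_config["fields"]]
    return f"{component_config['component']}: " + ", ".join(values)

config = {"component": "one-card", "fields": ["name", "status"]}

first = render(config, {"name": "order-42", "status": "shipped"})
# Later: live update -- same configuration, new data, no new LLM call.
updated = render(config, {"name": "order-42", "status": "delivered"})
assert "delivered" in updated
```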
## How to integrate *UI Agent*
*UI Agent* can be integrated into a *Controlling Assistant* developed using multiple AI frameworks or AI protocols,
see [Binding into AI application](ai_apps_binding/index.md). You can also refer to the ["Choose your framework"](../installation.md) guide.
The first approach to integrating *UI Agent* into the *Controlling assistant* is to use the assistant's LLM to choose and execute the
*UI Agent*. This approach makes sense if you want your assistant to act as an `Orchestrator` and decide about UI component generation,
for example to select which backend data loaded during processing needs to be visualized in the UI, or whether a UI has to be generated at all.
This approach costs you more in terms of the main LLM's processing price (tokens) and time, but gives you more flexibility.
An alternative approach is to invoke the *UI Agent* directly as part of your assistant's logic, at a specific moment of the processing flow,
after gathering the structured backend data for the response.
This approach is a bit more reliable and helps to reduce the main LLM's processing price (tokens) and time (you can even generate the UI in
parallel with *Natural language response* generation), but it is a bit less flexible.
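This direct, parallel invocation can be sketched with `asyncio` (both coroutines are hypothetical stand-ins for the real LLM calls):

```python
# Sketch of the direct-invocation integration: the UI block is generated in
# parallel with the natural language response to reduce end-user latency.
# Both coroutines are hypothetical stand-ins for real LLM calls.
import asyncio

async def generate_ui(data: dict) -> dict:
    await asyncio.sleep(0)  # placeholder for the (small) UI Agent LLM call
    return {"component": "table"}

async def generate_nl_response(data: dict) -> str:
    await asyncio.sleep(0)  # placeholder for the main assistant LLM call
    return "Here is your data."

async def respond(data: dict):
    # Run both generations concurrently instead of sequentially.
    return await asyncio.gather(generate_ui(data), generate_nl_response(data))

ui_block, text = asyncio.run(respond({"items": []}))
assert ui_block["component"] == "table"
```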
Strategy for the LLM-powered component selection and configuration step:
* `one_llm_call`: uses a single LLM call for component selection and configuration (default)
* `two_llm_calls`: uses two LLM calls; the first selects the component type, the second configures it (*experimental feature!*)
### `input_data_json_wrapping`[`bool`, optional]
Whether to perform [automatic `InputData` JSON wrapping](input_data/structure.md#automatic-json-wrapping) if the JSON structure is not suitable for LLM processing (default: `True`).
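A sketch of what such wrapping can mean in practice (the wrapping key and the exact logic are assumptions of this illustration; see the linked docs for the agent's actual behavior):

```python
# Sketch of "JSON wrapping": a bare top-level array or scalar is wrapped in
# an object so the LLM sees a named structure. Illustrative only, not the
# agent's actual implementation.

def wrap_input_json(data):
    # Leave objects untouched; wrap anything else under a key.
    return data if isinstance(data, dict) else {"items": data}

assert wrap_input_json([1, 2, 3]) == {"items": [1, 2, 3]}
assert wrap_input_json({"a": 1}) == {"a": 1}
```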
### `generate_all_fields`[`bool`, optional]
If `True`, the agent will generate all possible view Fields for the UI component into its output configuration `UIBlockComponentMetadata.fields_all`.
It can be used in the UI component to give the user a chance to manually select/update which fields are shown.
If `False`, the full field list is not generated. Can be overridden for individual `data_types` (default: `False`).
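For illustration, a configuration using the options described in this section might look like this (the dict-based shape is an assumption of this sketch; only the option names come from this guide):

```python
# Illustrative agent configuration. The dict-based shape is an assumption of
# this sketch; the option names (`input_data_json_wrapping`,
# `generate_all_fields`) come from this guide.
agent_config = {
    "input_data_json_wrapping": True,   # wrap unsuitable JSON (default: True)
    "generate_all_fields": False,       # do not emit fields_all (default: False)
}
assert agent_config["generate_all_fields"] is False
```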
Configurations for [`InputData.type`s](input_data/index.md#inputdata-object-fields), like:
Key is the `InputData.type` to configure, value is the configuration object for that data type.
Optional name of the [Input Data Transformer](input_data/transformation.md) to be used for this data type instead of [Agent's default one](#data_transformer-str-optional).
#### `generate_all_fields`[`bool`, optional]
If `True`, the agent will generate all possible view Fields for the UI component into its output configuration `UIBlockComponentMetadata.fields_all`.
It can be used in the UI component to give the user a chance to manually select/update which fields are shown.
If `False`, the full field list is not generated; if not defined, the [agent's default setting](#generate_all_fields-bool-optional) is used.
Generating all fields is supported only for `table` and `set-of-cards` components.
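The interplay between the agent-wide default and a per-data-type override can be sketched like this (the configuration shape and lookup helper are assumptions of this illustration):

```python
# Sketch of a per-data-type override: generate_all_fields is off agent-wide
# but enabled for one data type. The option names follow this guide; the
# surrounding config structure is an assumption of this sketch.
config = {
    "generate_all_fields": False,                 # agent-wide default
    "data_types": {
        "movies": {"generate_all_fields": True},  # override for this type
    },
}

def effective_generate_all_fields(config: dict, data_type: str) -> bool:
    # Per-type setting wins; otherwise fall back to the agent-wide default.
    per_type = config.get("data_types", {}).get(data_type, {})
    return per_type.get("generate_all_fields",
                        config.get("generate_all_fields", False))

assert effective_generate_all_fields(config, "movies") is True
assert effective_generate_all_fields(config, "orders") is False
```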
Value can be simple text or a number, etc. A list (array) of values is supported as well.
The layout for this set of cards has to be provided by the frontend application.
If enabled in the *UI Agent* configuration, the [agent's output can contain a list of all fields available in the input data](../../spec/output.md), so you can
provide the user with the ability to manually select what is shown after this UI component is generated.
Table is a UI block that displays:
* A table with AI-selected columns with AI-generated names, and rows of values gathered from the agent's input data.
An individual cell value can be simple text or a number, etc. A list (array) of values is supported as well.
If enabled in the *UI Agent* configuration, the [agent's output can contain a list of all fields available in the input data](../../spec/output.md), so you can
provide the user with the ability to manually select which columns are shown after this UI component is generated.
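For example, a frontend could combine the AI-selected columns with `fields_all` to implement such a column picker (the output shape shown here is a simplified assumption of this sketch, not the exact output specification):

```python
# Sketch of using `fields_all` from the agent output to let the user pick
# which table columns are shown. The output shape is an assumption here.
block = {
    "component": "table",
    "fields": [{"name": "Title"}, {"name": "Year"}],          # AI-selected
    "fields_all": [{"name": "Title"}, {"name": "Year"},
                   {"name": "Runtime"}, {"name": "Rating"}],  # all available
}

def visible_columns(block, user_selection=None):
    # Without a user selection, show the AI-selected defaults;
    # otherwise honor the user's pick from everything available.
    if user_selection is None:
        return [f["name"] for f in block["fields"]]
    return [f["name"] for f in block["fields_all"] if f["name"] in user_selection]

assert visible_columns(block) == ["Title", "Year"]
assert visible_columns(block, {"Title", "Rating"}) == ["Title", "Rating"]
```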
It should be returned from the AI framework/protocol binding so the *Controlling Assistant* can send it back later when it needs to refresh the component with new data.