**fern/pages/06-integrations/activepieces.mdx** (2 additions, 17 deletions)
```diff
@@ -6,7 +6,7 @@ hide-nav-links: true
 [Activepieces](https://www.activepieces.com/) is an open-source, no-code automation platform that enables users to streamline workflows by connecting various applications and automating tasks.
 
-With the AssemblyAI piece for Activepieces, you can use AssemblyAI to transcribe audio data with speech recognition models, analyze the data with audio intelligence models, and build generative features on top of it with LLMs.
+With the AssemblyAI piece for Activepieces, you can use AssemblyAI to transcribe audio data with speech recognition models, analyze the data with speech understanding models, and build generative features on top of it with LLMs.
 You can supply audio to the AssemblyAI piece and connect the output of any of AssemblyAI's models to other services in your Activepieces flow.
 
 ## Quickstart
```
```diff
@@ -60,7 +60,7 @@ If you don't have a publicly accessible URL, you can use the Upload a File modul
 If you don't want to wait until the transcript is ready, uncheck the `Wait until transcript is ready` parameter.
 
 <Info>
-  Configure your desired [Audio Intelligence models](/audio-intelligence) when
+  Configure your desired [Speech Understanding models](/speech-understanding) when
   you create the transcript. The results of the models will be included in the
   transcript output.
 </Info>
```
```diff
@@ -119,21 +119,6 @@ Deleting a transcript doesn't delete the transcript resource itself, but removes
   "error".
 </Note>
 
-### LeMUR
-
-#### Run a Task using LeMUR
-
-Prompt different LLMs over your audio data using LeMUR.
-You have to configure either the `Transcript IDs` or `Input Text` input field.
-
-#### Retrieve LeMUR response
-
-Retrieve a LeMUR response that was previously generated.
-
-#### Purge LeMUR request data
-
-Delete the data for a previously submitted LeMUR request.
-Response data from the LLM, as well as any context provided in the original request will be removed.
```
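The `Wait until transcript is ready` option touched above amounts to polling the transcript until it reaches a terminal status, which per the docs is `completed` or `error`. A minimal sketch of that loop, where `fetch_status` is a hypothetical callable standing in for a GET on the transcript resource (it is not part of any AssemblyAI SDK):

```python
import time


def wait_until_ready(fetch_status, poll_interval=3.0, timeout=600.0):
    """Poll a transcript until it is 'completed' or 'error'.

    fetch_status: zero-argument callable returning the current
    transcript status string (e.g. by GETting the transcript).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "error"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("transcript did not finish before the timeout")
```

Activepieces runs this loop for you when the parameter is checked; the sketch only illustrates what "waiting" means under the hood.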
**fern/pages/06-integrations/make.mdx** (2 additions, 13 deletions)
```diff
@@ -6,7 +6,7 @@ hide-nav-links: true
 [Make](https://make.com/) (formerly Integromat) is a workflow automation tool that lets you integrate various services together without requiring coding knowledge.
 
-With the AssemblyAI app for Make, you can use our AI models to process audio data by transcribing it with speech recognition models, analyzing it with audio intelligence models, and building generative features on top of it with LLMs.
+With the AssemblyAI app for Make, you can use our AI models to process audio data by transcribing it with speech recognition models, analyzing it with speech understanding models, and building generative features on top of it with LLMs.
 You can supply audio to the AssemblyAI app and connect the output of our models to other services in your Make scenarios.
 
 ## Quickstart
```
```diff
@@ -70,7 +70,7 @@ If you don't have a publicly accessible URL, you can use the [Upload a File](#up
 If you don't want to wait until the transcript is ready, change the `Wait until Transcript is Ready` parameter to `No` under **Show advanced settings**.
 
 <Info>
-  Configure your desired [Audio Intelligence models](/audio-intelligence) when
+  Configure your desired [Speech Understanding models](/speech-understanding) when
   you create the transcript. The results of the models will be included in the
   transcript output.
 </Info>
```
```diff
@@ -140,17 +140,6 @@ Deleting a transcript does not delete the transcript resource itself, but remove
   "error".
 </Note>
 
-### LeMUR
-
-#### Run a Task using LeMUR
-
-Prompt different LLMs over your audio data using LeMUR.
-You have to configure either the `Transcript IDs` or `Input Text` input field.
-
-#### Purge a LeMUR Request
-
-Delete the data for a previously submitted LeMUR request.
-Response data from the LLM, as well as any context provided in the original request will be removed.
```
**fern/pages/06-integrations/power-automate.mdx** (3 additions, 17 deletions)
```diff
@@ -7,7 +7,7 @@ hide-nav-links: true
 [Microsoft Power Automate](https://www.microsoft.com/en-us/power-platform/products/power-automate) is a low-code workflow automation platform with a rich collection of connectors to Microsoft's first-party services and third-party services. [Azure Logic Apps](https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-overview) is the equivalent service built for developers and IT pros.
 
 The AssemblyAI connector makes our API available to both Microsoft Power Automate and Azure Logic Apps.
-With the connector, you can use AssemblyAI to transcribe audio data with speech recognition models, analyze the data with audio intelligence models, and build generative features on top of it with LLMs.
+With the connector, you can use AssemblyAI to transcribe audio data with speech recognition models, analyze the data with speech understanding models, and build generative features on top of it with LLMs.
 You can supply audio to the AssemblyAI connector and connect the output of our models to other services in your flows.
 
 ## Quickstart
```
```diff
@@ -49,7 +49,7 @@ Once you transcribe the file, the file will be removed from AssemblyAI's servers
 ## Transcribe Audio
 
 To transcribe your audio, configure the `Audio URL` parameter using your audio file URL.
-Then, configure the additional parameters to enable more [Speech Recognition](https://www.assemblyai.com/docs/speech-to-text/pre-recorded-audio) features and [Audio Intelligence](https://www.assemblyai.com/docs/audio-intelligence) models.
+Then, configure the additional parameters to enable more [Speech Recognition](https://www.assemblyai.com/docs/speech-to-text/pre-recorded-audio) features and [Speech Understanding](https://www.assemblyai.com/docs/speech-understanding) models.
 
 The result of the Transcribe Audio action is a queued transcript which will start being processed immediately.
 To get the completed transcript, you have two options:
```
```diff
@@ -216,7 +216,7 @@ The output of this action is a `queued` transcript. Learn [how to wait until the
 </Warning>
 
 <Info>
-  Configure your desired [Audio Intelligence models](/audio-intelligence) when
+  Configure your desired [Speech Understanding models](/speech-understanding) when
   you create the transcript. The results of the models will be included in the
   transcript output when the transcript is completed.
 </Info>
```
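Configuring models "when you create the transcript", as the note above puts it, corresponds to boolean flags in the transcript-creation request body of the underlying REST API. A sketch of building that payload; the helper name is made up, and `sentiment_analysis` and `entity_detection` are shown as example flags from the transcript API:

```python
import json


def build_transcript_request(audio_url, **model_flags):
    """Build a JSON body for creating a transcript.

    Keyword flags such as sentiment_analysis=True enable the
    corresponding model; only truthy flags end up in the payload.
    """
    body = {"audio_url": audio_url}
    body.update({name: True for name, enabled in model_flags.items() if enabled})
    return json.dumps(body)


payload = build_transcript_request(
    "https://example.com/call.mp3",
    sentiment_analysis=True,
    entity_detection=True,
)
```

The connector exposes these same options as action parameters, so you never assemble the JSON yourself in Power Automate.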
```diff
@@ -274,20 +274,6 @@ Delete the transcript. Deleting does not delete the resource itself, but removes
   `error`.
 </Note>
 
-### LeMUR
-
-#### Run a Task Using LeMUR
-
-Use the LeMUR task endpoint to input your own LLM prompt.
-You have to configure either the `Transcript IDs` or `Input Text` input field.
-
-#### Retrieve LeMUR Response
-
-Retrieve a LeMUR response that was previously generated.
-
-#### Purge LeMUR Request Data
-
-Delete the data for a previously submitted LeMUR request. The LLM response data, as well as any context provided in the original request will be removed.
```
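The LeMUR docs removed in each file repeat the same constraint: you have to configure either `Transcript IDs` or `Input Text`, but not both. A sketch of that mutual-exclusion check as a hypothetical helper (not AssemblyAI SDK code):

```python
def lemur_task_input(prompt, transcript_ids=None, input_text=None):
    """Build a LeMUR task request dict, enforcing that exactly one
    of transcript_ids or input_text is provided, mirroring the
    'Transcript IDs' / 'Input Text' fields in the integrations above.
    """
    if (transcript_ids is None) == (input_text is None):
        raise ValueError("provide exactly one of transcript_ids or input_text")
    request = {"prompt": prompt}
    if transcript_ids is not None:
        request["transcript_ids"] = list(transcript_ids)
    else:
        request["input_text"] = input_text
    return request
```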
```diff
-LeMUR is a framework by AssemblyAI to process audio files with an LLM.
-The AssemblyAI plugin has a dedicated node for each LeMUR endpoint.
-Each node accepts Transcript IDs or Input Text as input which you can get from the Transcribe Audio node. Additional parameters are available as inputs and as node configuration. For more information what these parameters do, see [LeMUR API reference](https://www.assemblyai.com/docs/api-reference/lemur).
-
-#### LeMUR Summary node
-
-The LeMUR Summary node uses LeMUR to summarize a given transcript.
```
**fern/pages/06-integrations/twilio.mdx** (1 addition, 1 deletion)
```diff
@@ -6,7 +6,7 @@ hide-nav-links: true
 Twilio is a programmable communication platform for voice, messaging, and email.
 By combining Twilio with AssemblyAI, you can transcribe voice calls in [real-time](/docs/speech-to-text/streaming), and voice recordings and voice messages [asynchronously](/docs/speech-to-text/pre-recorded-audio).
-Combine transcription with our [audio intelligence models](/docs/audio-intelligence/summarization) and [LeMUR LLM framework](/docs/lemur/summarize-audio) to analyze the calls and messages.
+Combine transcription with our [speech understanding models](/docs/speech-understanding/summarization) to analyze the calls and messages.
```