The GenAI API wrapper for Delphi seamlessly integrates OpenAI’s latest models (o4, gpt-4.1 and gpt-5), delivering robust support for agent chats/responses, text generation, vision, audio analysis, JSON configuration, web search, asynchronous operations, video (Sora-2, Sora-2 Pro), and image generation with gpt-image-1.

Delphi GenAI - Optimized OpenAI Integration



Introduction

Built with Delphi 12 Community Edition (v12.1 Patch 1)
The wrapper itself is MIT-licensed.
You can compile and test it free of charge with Delphi CE; any recent commercial Delphi edition works as well.

DelphiGenAI is a full OpenAI wrapper for Delphi, covering the entire platform: text, vision, audio, image generation, video (Sora-2), embeddings, conversations, containers, and the latest v1/responses agentic workflows. It offers a unified interface with sync/async/await support across major Delphi platforms, making it easy to leverage modern multimodal and tool-based AI capabilities in Delphi applications.


Important

This is an unofficial library. OpenAI does not provide an official library for Delphi. This repository contains a Delphi implementation built on top of the OpenAI public API.



Documentation Overview

Comprehensive Project Documentation Reference



Tips for using the tutorial effectively

Obtain an API Key

To initialize the API instance, you need to obtain an API key from OpenAI.

Once you have a key, you can initialize the IGenAI interface, which is the entry point to the API.

Note

//uses GenAI, GenAI.Types;

//Declare 
//  Client: IGenAI;

 Client := TGenAIFactory.CreateInstance(api_key);

To streamline the use of the API wrapper, the process for declaring units has been simplified. Regardless of which methods you use, you only need to reference the following two core units: GenAI and GenAI.Types.


Tip

To use the examples in this tutorial effectively, particularly when working with asynchronous methods, it is recommended to define the client interfaces with the broadest possible scope. Ideally, these clients should be instantiated in the application's OnCreate handler.
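As a minimal sketch of this recommendation (the field name FClient is illustrative, not prescribed by the wrapper), the client can be declared as a form field and created once at startup:

```pascal
//uses GenAI, GenAI.Types, System.SysUtils;

type
  TForm1 = class(TForm)
    procedure FormCreate(Sender: TObject);
  private
    FClient: IGenAI;  // broad scope: lives as long as the form does
  end;

procedure TForm1.FormCreate(Sender: TObject);
begin
  // Created once; the interface reference keeps the client alive
  // for all subsequent synchronous and asynchronous calls.
  FClient := TGenAIFactory.CreateInstance(GetEnvironmentVariable('OPENAI_API_KEY'));
end;
```

Because IGenAI is an interface, it is reference-counted and released automatically when the form is destroyed.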


Code examples

The OpenAI API lets you plug advanced models into your applications and production workflows in just a few lines of code. Once billing is enabled on your account, your API keys become active and you can start making requests — including your first call to the chat endpoint within seconds.


Synchronous code example

//uses GenAI, GenAI.Types, GenAI.Tutorial.VCL;

  var API_Key := 'OPENAI_API_KEY';
  var MyClient := TGenAIFactory.CreateInstance(API_Key);

  var Value := MyClient.Responses.Create(
    procedure (Params: TResponsesParams)
    begin
      Params
        .Model('gpt-4.1-mini')
        .Input('What is the difference between a mathematician and a physicist?')
        .Store(False);  // Response not stored
    end);
  try
    for var Item in Value.Output do
      for var SubItem in Item.Content do
        Memo1.Text := SubItem.Text;
  finally
    Value.Free;
  end;

Asynchronous code example

//uses GenAI, GenAI.Types, GenAI.Tutorial.VCL;

var MyClient: IGenAI;

procedure TForm1.Test;
begin
  var API_Key := 'OPENAI_API_KEY';
  MyClient := TGenAIFactory.CreateInstance(API_Key);

  MyClient.Responses.AsynCreate(
    procedure (Params: TResponsesParams)
    begin
      Params
        .Model('gpt-4.1-mini')
        .Input('What is the difference between a mathematician and a physicist?')
        .Store(False);  // Response not stored
    end,
    function : TAsynResponse
    begin
      Result.OnStart :=
        procedure (Sender: TObject)
        begin
          Memo1.Lines.Text := 'Please wait...';
        end;

      Result.OnSuccess :=
        procedure (Sender: TObject; Value: TResponse)
        begin
          for var Item in Value.Output do
            for var SubItem in Item.Content do
              Memo1.Text := SubItem.Text;
        end;

      Result.OnError :=
        procedure (Sender: TObject; Error: string)
        begin
          Memo1.Lines.Text := Error;
        end;
    end);
end;

Strategies for quickly using the code examples

To streamline the implementation of the code examples provided in this tutorial, two support units are included in the source code: GenAI.Tutorial.VCL and GenAI.Tutorial.FMX. Depending on the platform you use to test the examples, initialize either the TVCLTutorialHub or the TFMXTutorialHub class within the application's OnCreate event.

Important

In the repository's sample folder, you will find two ZIP archives, each containing a template for easily testing all the code examples provided in this tutorial. Extract the VCL or FMX version, depending on your target platform. Then add the path to the DelphiGenAI library in your project's options, and copy and paste the code examples for immediate execution.

These two archives are designed to fully leverage the TutorialHub middleware and enable rapid upskilling with DelphiGenAI.

  • VCL support with TutorialHUB: TestGenAI_VCL.zip

  • FMX support with TutorialHUB: TestGenAI_FMX.zip


This project, built with DelphiGenAI, lets you consult the GenAI documentation and code in order to streamline and accelerate your upskilling.



GenAI functional coverage

The table below succinctly summarizes all OpenAI endpoints supported by GenAI.

Endpoint                Status / notes
/assistants             deprecated
/audio/speech
/audio/transcriptions
/audio/translations
/batches
/chat/completions
/chatkit
/completions            legacy
/containers             new
/conversations          new
/embeddings
/evals
/files
/fine_tuning
/images
/models
/moderations
/organization
/realtime               new
/responses              updated
/threads                deprecated
/uploads
/vector_stores
/videos                 new


Quick Start Guide

Responses vs. Chat Completions

The v1/responses API is the new core API, designed as an agentic primitive that combines the simplicity of chat completions with the power of action execution. It natively includes several built‑in tools:

  • Web search
  • File search
  • Computer control
  • Image generation
  • Remote MCP
  • Code interpreter

With these integrated capabilities, you can build more autonomous, agent‑oriented applications that not only generate text but also interact with their environment.

The v1/responses endpoint is intended to gradually replace v1/chat/completions, as it embodies a synthesis of current best practices in AI—especially for those looking to adopt an agentic approach.
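As a rough sketch of how a built-in tool fits the parameter-builder pattern shown in the earlier examples (the Tools configuration below is an assumed method name, not confirmed wrapper API — check the Responses documentation for the actual builder; Model, Input and Store match the examples above):

```pascal
//uses GenAI, GenAI.Types;

// NOTE: ".Tools(...)" is an illustrative, hypothetical call;
// the rest follows the wrapper's documented builder pattern.
var Value := MyClient.Responses.Create(
  procedure (Params: TResponsesParams)
  begin
    Params
      .Model('gpt-4.1-mini')
      .Input('What changed in the latest Delphi release?')
      //.Tools(...)              // hypothetical: enable built-in web search here
      .Store(False);
  end);
try
  // Process Value.Output as in the synchronous example above
finally
  Value.Free;
end;
```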

To help you get up to speed on both endpoints, the following two documents provide detailed documentation, complete with numerous request examples and use cases:

Note

If you're a new user, we recommend using the Responses API.


Functional differences between the two endpoints

Capability             Chat Completions API    Responses API
Text generation        ✓                       ✓
Audio                  ✓                       Coming soon
Vision                 ✓                       ✓
Structured Outputs     ✓                       ✓
Function calling       ✓                       ✓
Web search             ✓                       ✓
File search                                    ✓
Computer use                                   ✓
Code interpreter                               ✓
Image generation                               ✓
Remote MCP                                     ✓
Reasoning summaries                            ✓

Warning

Note from OpenAI
The Chat Completions API is an industry standard for building AI applications, and we intend to continue supporting this API indefinitely. We're introducing the Responses API to simplify workflows involving tool use, code execution, and state management. We believe this new API primitive will allow us to more effectively enhance the OpenAI platform into the future.


Check out the full documentation

Text generation (Non streamed, Streamed, Multi-turn conversations), Generating Audio Responses with Chat (Audio and Text to Text, Audio to Audio, Audio multi-turn conversations), Vision (Analyze single source, Analyze multi-source, Low or high fidelity image understanding), Reasoning with o1, o3 or o4, Web search...


Check out the full documentation

Text generation (Non streamed, Streamed, Multi-turn conversations), Vision (Analyze single source, Analyze multi-source, Low or high fidelity image understanding), Reasoning with o1, o3 or o4, Web search, File search...



Tips and tricks

  • How to prevent an error when closing an application while requests are still in progress?

Starting from version 1.0.1 of GenAI, the GenAI.Monitoring unit is responsible for monitoring ongoing HTTP requests.

The Monitoring interface is accessible by including the GenAI.Monitoring unit in the uses clause.
Alternatively, you can access it via the HttpMonitoring function, declared in the GenAI unit.

Usage Example

//uses GenAI;

procedure TForm1.FormCloseQuery(Sender: TObject; var CanClose: Boolean);
begin
  CanClose := not HttpMonitoring.IsBusy;
  if not CanClose then
    MessageDlg(
      'Requests are still in progress. Please wait for them to complete before closing the application.',
      TMsgDlgType.mtInformation, [TMsgDlgBtn.mbOK], 0);
end;

  • How to execute multiple background requests to process a batch of responses?

In the GenAI.Chat unit, the CreateParallel method executes multiple prompts asynchronously in the background (since version 1.0.1 of GenAI).

Among the method's parameters, you can specify the model to be used for the entire batch of prompts. However, assigning a different model to each prompt individually is not supported.

Usage Example

//uses GenAI, GenAI.Types, GenAI.Tutorial.VCL;

  Client.Chat.CreateParallel(
    procedure (Params: TBundleParams)
    begin
      Params.Prompts([
        'How many television channels were there in France in 1980?',
        'How many TV channels were there in Germany in 1980?'
      ]);
      Params.System('Write the text in capital letters.');
      Params.Model('gpt-4o-mini');
    end,
    function : TAsynBundleList
    begin
      Result.Sender := TutorialHub;

      Result.OnStart :=
        procedure (Sender: TObject)
        begin
          Display(Sender, 'Start the job' + sLineBreak);
        end;

      Result.OnSuccess :=
        procedure (Sender: TObject; Bundle: TBundleList)
        begin
          // Background bundle processing
          for var Item in Bundle.Items do
            begin
              Display(Sender, 'Index : ' + Item.Index.ToString);
              Display(Sender, 'FinishIndex : ' + Item.FinishIndex.ToString);
              Display(Sender, Item.Prompt + sLineBreak);
              Display(Sender, Item.Response + sLineBreak + sLineBreak);
              // or Display(Sender, TChat(Item.Chat).Choices[0].Message.Content);
            end;
        end;

      Result.OnError := Display;
    end);

Tip

The provided example is somewhat simplified. It would be better to adopt this approach with JSON-formatted outputs, as this allows for the implementation of more complex and tailored processing during the final stages.
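For instance, if the system message instructs the model to reply strictly in JSON, each Item.Response string from the OnSuccess handler above can be parsed with Delphi's standard System.JSON unit. The "answer" field below is an assumed example schema that you would define in your prompt, not something the API enforces:

```pascal
//uses System.JSON;

// Parse a reply shaped like {"answer": "...", "confidence": 0.9}.
// "answer" is an assumed field name from your own prompt-defined schema.
var Json := TJSONObject.ParseJSONValue(Item.Response);
try
  if Json is TJSONObject then
    Display(Sender, Json.GetValue<string>('answer'))
  else
    Display(Sender, 'Response was not valid JSON: ' + Item.Response);
finally
  Json.Free;
end;
```

ParseJSONValue returns nil on malformed input, which the `is TJSONObject` check also guards against, so the handler degrades gracefully when the model strays from the requested format.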


  • How to structure a chain of thought and develop advanced processing with GenAI?

To achieve this, it is recommended to use a Promise-based pattern to efficiently construct a chain of thought with GenAI. The CerebraChain project offers a method that can be used with GenAI.


  • How do you structure advanced reasoning using Promises and pipelines?

Orchestrate AI thought chains elegantly and efficiently. By leveraging a dynamic pipeline model, a configurable sequential scheduler, and Promises, you can meet the complex requirements of working with modern AI models like OpenAI. Check out the SynkFlow repository.



Deprecated

Deprecation of the OpenAI Assistants API

OpenAI announced the deprecation of the Assistants API on August 26, 2025, with permanent removal scheduled for August 26, 2026. This API is being replaced by the new Responses API and Conversations API, launched in March 2025, which integrate and simplify all functionality previously provided by the Assistants API.

To ensure future compatibility, it is strongly recommended to migrate your integrations to the Responses and Conversations APIs as soon as possible. See the Assistants-to-Conversations migration guide for more details.

Affected units: GenAI.Messages.pas, GenAI.Threads.pas, GenAI.Run.pas, GenAI.RunSteps.pas



Contributing

Pull requests are welcome. If you're planning to make a major change, please open an issue first to discuss your proposed changes.



License

This project is licensed under the MIT License.

