Job Runner UI Squashed#1144

Draft
dsynkd wants to merge 2 commits into hao-ai-lab:main from dsynkd:job_runner_ui_squashed

Conversation


@dsynkd dsynkd commented Feb 28, 2026

This PR adds a UI for running jobs. Currently, only video inference is supported, with the following parameters:

  • Inference Steps
  • Frames
  • Height
  • Width
  • Guidance Scale
  • Seed
  • GPUs
  • DiT CPU Offload
  • Text Encoder CPU Offload
  • FSDP Inference

Jobs can be:

  • Created
  • Started
  • Stopped
  • Restarted
  • Deleted
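
The job actions above form a small state machine. The sketch below is purely illustrative (the PR's actual `JobStatus` enum lives in `ui/job_runner.py` and may differ); the `"start"` transitions follow the README's "Start a pending/stopped/failed job", while the other transition sets are assumptions:

```python
from enum import Enum

class JobStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"
    STOPPED = "stopped"

# Hypothetical transition table: which statuses each action may be applied to.
ALLOWED = {
    "start": {JobStatus.PENDING, JobStatus.STOPPED, JobStatus.FAILED},
    "stop": {JobStatus.RUNNING},
    "restart": {JobStatus.RUNNING, JobStatus.STOPPED, JobStatus.FAILED},
    "delete": set(JobStatus) - {JobStatus.RUNNING},
}

def can(action: str, status: JobStatus) -> bool:
    """Return True if `action` is valid for a job in `status`."""
    return status in ALLOWED[action]
```

Under these assumptions, `can("start", JobStatus.PENDING)` is `True`, while a running job must be stopped before it can be deleted.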

All models registered through the FastVideo registry are supported.

Currently the following features are included:

  • Console Log
  • Download Log
  • Inference Timer

To run, navigate to the `ui` directory and run:

```bash
npm i
npm run build
npm start
```

There are two services that start concurrently: the web server and the API server. They can also be started separately using `npm run start:api` and `npm run start:web`. In this case, the `.env.local` file can be used to configure the API server URL (it defaults to `http://localhost:8089`).

The web server can also be run in dev mode using `npm run dev`, which automatically restarts the server whenever UI source files change.

A demo video is shown below:

Screen.Recording.2026-02-28.at.1.31.12.PM.480p.mov

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly!

This pull request delivers a significant new feature: a user-friendly web interface for FastVideo's video generation capabilities. It abstracts the complexities of running video inference jobs into a manageable system, allowing users to easily control and monitor their tasks through a dedicated UI and a robust FastAPI backend. This enhancement greatly improves the accessibility and usability of the FastVideo library for various video generation purposes.

Highlights

  • New Job Runner UI: Introduced a comprehensive web-based UI for managing FastVideo video generation jobs, allowing users to create, start, stop, restart, and delete jobs.
  • API for Job Management: Developed a FastAPI-based backend API to handle job lifecycle, model loading, and video generation, with endpoints for listing models, managing jobs, and accessing logs/outputs.
  • Enhanced Logging and Progress Tracking: Implemented a robust logging system that captures worker process logs, provides real-time progress updates, and allows for log file downloads, crucial for monitoring long-running video inference tasks.
  • Configurable Inference Parameters: The UI supports a wide range of video inference parameters including inference steps, frames, height, width, guidance scale, seed, GPU count, DiT CPU offload, Text Encoder CPU offload, and FSDP inference.
  • Next.js Frontend: The frontend is built with Next.js, providing a modern and responsive interface for interacting with the job runner, featuring live-polling for job status and in-browser video previews.


Changelog
  • .gitignore
    • Added new patterns to ignore Next.js generated files.
  • .python-version
    • Added a file specifying Python version 3.12.
  • fastvideo/entrypoints/video_generator.py
    • Modified the VideoGenerator constructor and from_fastvideo_args method to accept an optional log_queue for forwarding worker logs.
    • Refactored generate_video to use an internal _generate_video_impl and manage log_queue setup and teardown for the executor.
  • fastvideo/registry.py
    • Added a new function get_registered_model_paths to retrieve a list of all registered HuggingFace model paths, useful for UI integration.
  • fastvideo/worker/executor.py
    • Updated the Executor class to include an optional log_queue in its initialization.
  • fastvideo/worker/multiproc_executor.py
    • Introduced logging utilities, including _make_queue_log_handler, to facilitate log forwarding from multiprocessing workers.
    • Modified worker initialization to pass and handle log_queue for capturing worker-specific logs.
    • Enhanced error reporting during worker startup, allowing workers to send detailed error messages and tracebacks to the parent process.
    • Added set_log_queue and clear_log_queue methods to MultiprocExecutor for dynamic log queue management during job execution.
    • Improved shutdown method to check for worker initialization before attempting to shut them down, preventing errors in case of failed startup.
  • pyproject.toml
    • Added fastapi and uvicorn to the project dependencies for the new API server.
  • ui/.gitignore
    • Added standard ignore patterns for a Next.js project, including node_modules, .next, and out directories.
  • ui/README.md
    • Added a comprehensive README for the FastVideo Job Runner UI, detailing quick start instructions, features, API endpoints, and architectural overview.
  • ui/init.py
    • Added SPDX license identifier.
  • ui/main.py
    • Added a main entry point to allow running the UI server directly via python -m ui.
  • ui/eslint.config.mjs
    • Added ESLint configuration for the Next.js frontend, extending eslint-config-next.
  • ui/job_runner.py
    • Added a new module job_runner.py to manage the lifecycle of video generation jobs, including Job dataclass, JobStatus enum, JobLogBuffer for log handling, and JobRunner for job execution and generator caching.
  • ui/next.config.ts
    • Added Next.js configuration, enabling the React Compiler.
  • ui/package.json
    • Added package.json for the Next.js frontend, defining project metadata, scripts (dev, build, start), and dependencies (Next.js, React, Tailwind CSS, ESLint).
  • ui/postcss.config.mjs
    • Added PostCSS configuration for Tailwind CSS integration in the Next.js frontend.
  • ui/public/logo.svg
    • Added the FastVideo logo as an SVG asset.
  • ui/server.py
    • Added a new FastAPI server (server.py) to serve the UI's API endpoints, managing models, jobs, logs, and video outputs.
    • Implemented signal handlers for graceful shutdown and to ignore SIGQUIT from worker processes.
    • Added functionality to create a .env.local file for frontend API URL configuration.
  • ui/src/app/Layout.module.css
    • Added CSS styles for the main layout, section headers, and placeholder text within the UI.
  • ui/src/app/globals.css
    • Added global CSS variables for theming, basic resets, and base styles for body and form elements.
  • ui/src/app/jobs/[id]/page.tsx
    • Added a new Next.js page component to display detailed information for a specific job, including its status, parameters, and progress.
  • ui/src/app/layout.tsx
    • Added the root layout component for the Next.js application, including metadata and the main header with the FastVideo logo.
  • ui/src/app/page.tsx
    • Added the main home page component for the Next.js application, displaying a list of all jobs and providing functionality to create new ones, with real-time polling for job updates.
  • ui/src/components/CreateJobButton.tsx
    • Added a React component for a button that triggers the job creation modal.
  • ui/src/components/CreateJobModal.tsx
    • Added a React component for the modal used to create new jobs, featuring form inputs for model selection and various inference parameters.
  • ui/src/components/JobCard.tsx
    • Added a React component to display individual job cards, including job details, status badges, action buttons (start, stop, delete), elapsed time, and an expandable console log viewer with download functionality.
  • ui/src/components/styles/Badge.module.css
    • Added CSS styles for different job status badges (pending, running, completed, failed, stopped).
  • ui/src/components/styles/Button.module.css
    • Added a comprehensive set of CSS styles for various button types and states (primary, small, start, stop, delete, view, console, active).
  • ui/src/components/styles/Card.module.css
    • Added CSS styles for general card components used throughout the UI.
  • ui/src/components/styles/Console.module.css
    • Added CSS styles for the console output panel, including scrollbar customization.
  • ui/src/components/styles/Form.module.css
    • Added CSS styles for form rows, labels, and the advanced settings section with a grid layout.
  • ui/src/components/styles/Header.module.css
    • Added CSS styles for the application header, including title and subtitle.
  • ui/src/components/styles/JobCard.module.css
    • Added detailed CSS styles for the job card component, covering its layout, header, prompt, metadata, actions, error display, and integrated console.
  • ui/src/components/styles/Modal.module.css
    • Added CSS styles for modal components, including backdrop, content, close button, and form-specific modal adjustments, along with animation keyframes.
  • ui/src/components/styles/ProgressBar.module.css
    • Added CSS styles for the progress bar component, including background, fill, and label.
  • ui/src/components/styles/Toast.module.css
    • Added CSS styles for toast notifications, including container, individual toast appearance, and animation.
  • ui/src/lib/api.ts
    • Added a new TypeScript file defining API client functions for interacting with the FastAPI backend, including fetching models, managing jobs, and retrieving logs.
  • ui/src/lib/types.ts
    • Added a new TypeScript file defining the Job interface, specifying the structure of job data received from the API.
  • ui/tsconfig.json
    • Added TypeScript configuration for the Next.js project, including compiler options, included files, and path aliases.
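
The log-forwarding changes listed above (an optional `log_queue` handed to the executor, with worker logs shipped back to the parent) follow the standard stdlib queue-handler pattern. The sketch below is illustrative only; `make_queue_log_handler` mirrors the PR's `_make_queue_log_handler` in name but is not its actual implementation, and in FastVideo the queue would be a `multiprocessing.Queue` shared with worker processes rather than an in-process `queue.Queue`:

```python
import logging
import logging.handlers
import queue

def make_queue_log_handler(log_queue: "queue.Queue") -> logging.Handler:
    """Build a handler that ships a worker's log records onto a queue."""
    return logging.handlers.QueueHandler(log_queue)

# A plain queue.Queue demonstrates the same mechanics in one process.
log_queue: "queue.Queue" = queue.Queue()

worker_logger = logging.getLogger("fastvideo.worker")
worker_logger.setLevel(logging.INFO)
worker_logger.addHandler(make_queue_log_handler(log_queue))
worker_logger.propagate = False

# The worker side emits normally...
worker_logger.info("worker started")

# ...and the parent side drains the queue to re-emit or persist the records.
record = log_queue.get_nowait()
print(record.getMessage())  # -> "worker started"
```

This is what lets the UI's `JobLogBuffer` capture per-job logs from worker processes without the workers writing to the parent's handlers directly.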
Activity
  • The pull request introduces a new UI for running jobs, with a demo video provided in the description to showcase its functionality.
  • The author, dsynkd, has implemented both the backend API using FastAPI and the frontend UI using Next.js, along with necessary modifications to the core fastvideo library for logging and model integration.

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant new feature: a web UI for running and managing video generation jobs. The backend is built with FastAPI and includes a robust JobRunner with multi-process worker support and detailed logging capabilities. The frontend is a Next.js application.

My review focuses on improving error handling, cross-platform compatibility, and documentation accuracy. I've identified a couple of high-severity issues in the backend related to error aggregation in worker processes and signal handling that could affect robustness. I've also provided suggestions to align the README.md with the implementation to avoid confusion for new users, and a few minor improvements in the frontend code and configuration.

Overall, this is a well-structured and comprehensive addition. The changes to the core library to support logging from worker processes are particularly well-implemented.

Comment on lines +240 to +242

```python
signal.signal(signal.SIGQUIT, handle_sigquit)
if hasattr(signal, "SIGQUIT"):  # SIGQUIT might not be available on all platforms (e.g., Windows)
    signal.signal(signal.SIGTERM, handle_sigterm)
```
high

The signal handler setup for SIGQUIT and SIGTERM is incorrect and will cause issues on different platforms.

  1. signal.signal(signal.SIGQUIT, handle_sigquit) is called before checking if SIGQUIT is available. This will cause an AttributeError on platforms where SIGQUIT is not defined, such as Windows.
  2. The registration of the SIGTERM handler is inside the if hasattr(signal, "SIGQUIT") block. This means if SIGQUIT is not available, the SIGTERM handler will also not be registered, preventing graceful shutdown on SIGTERM.

The SIGQUIT registration should be guarded, and the SIGTERM registration should be unconditional (as SIGTERM is standard).

Suggested change

```diff
-signal.signal(signal.SIGQUIT, handle_sigquit)
-if hasattr(signal, "SIGQUIT"):  # SIGQUIT might not be available on all platforms (e.g., Windows)
-    signal.signal(signal.SIGTERM, handle_sigterm)
+if hasattr(signal, "SIGQUIT"):  # SIGQUIT might not be available on all platforms (e.g., Windows)
+    signal.signal(signal.SIGQUIT, handle_sigquit)
+signal.signal(signal.SIGTERM, handle_sigterm)
```
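
The corrected pattern can be checked in isolation. This standalone sketch (the handler names are illustrative, not the PR's actual functions) registers SIGTERM unconditionally and touches SIGQUIT only where the platform defines it:

```python
import signal
import sys

def handle_sigterm(signum, frame):
    """Gracefully shut down on SIGTERM."""
    sys.exit(0)

def handle_sigquit(signum, frame):
    """Ignore SIGQUIT forwarded from worker processes."""
    pass

# SIGTERM is standard, so register it unconditionally.
signal.signal(signal.SIGTERM, handle_sigterm)

# SIGQUIT does not exist on Windows; guard before referencing it.
if hasattr(signal, "SIGQUIT"):
    signal.signal(signal.SIGQUIT, handle_sigquit)

print(signal.getsignal(signal.SIGTERM) is handle_sigterm)  # -> True
```

On Windows the `hasattr` guard is skipped entirely, so no `AttributeError` is raised and graceful shutdown on SIGTERM still works.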

Comment on lines +1 to +101
# FastVideo Job Runner UI

A lightweight web-based UI for creating and managing FastVideo video generation
jobs.

## Quick Start

First run the API server:

```bash
python -m ui.api_server --output-dir /path/to/videos --log-dir /path/to/logs
```

The API server starts running at [http://localhost:8188](http://localhost:8188) by default. You can
configure this using the `--api-url` parameter.

Now you have to configure the environment file to include the API server path.

```bash
cd frontend
cp .env.example .env.local
# Edit the file to set the API server path
```

Run the web server:

```bash
npm i && npm run dev
```

## Features

- Select from supported FastVideo text-to-video models
- Enter a prompt and configure generation parameters (steps, frames, resolution,
guidance scale, seed, GPU count)
- Create, start, stop, and delete jobs via the UI
- Live-polling job status updates
- In-browser video preview for completed jobs
- Generated videos are saved to a configurable output directory

## API Endpoints

| Method | Path | Description |
| -------- | ---------------------------- | ---------------------------------- |
| `GET` | `/api/models` | List available models |
| `GET` | `/api/jobs` | List all jobs (newest first) |
| `GET` | `/api/jobs/{id}` | Get a single job's details |
| `POST` | `/api/jobs` | Create a new job |
| `POST` | `/api/jobs/{id}/start` | Start a pending/stopped/failed job |
| `POST` | `/api/jobs/{id}/stop` | Request a running job to stop |
| `DELETE` | `/api/jobs/{id}` | Delete a job |
| `GET` | `/api/jobs/{id}/video` | Stream the generated video/image |
| `GET` | `/api/jobs/{id}/log` | Download the job's log file |

### Create Job Request Body

```json
{
"model_id": "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
"prompt": "A curious raccoon in a sunflower field",
"num_inference_steps": 50,
"num_frames": 81,
"height": 480,
"width": 832,
"guidance_scale": 5.0,
"seed": 1024,
"num_gpus": 1
}
```

## Architecture

```
ui/
├── server.py # Combined FastAPI server (API + static files)
├── api_server.py # API-only server (REST endpoints)
├── web_server.py # Web-only server (static files + optional API proxy)
├── requirements.txt # Python dependencies (fastapi, uvicorn, httpx)
└── static/
├── index.html # Single-page application
├── style.css # Dark-themed responsive styles
└── app.js # Frontend logic (fetch API, polling, rendering)
```

- **API Server** (`api_server.py`): A FastAPI server that manages an in-memory job store. Each job runs
in a daemon thread that uses `fastvideo.VideoGenerator` to generate videos.
Model instances are cached so switching between prompts on the same model
doesn't reload weights. Provides REST endpoints under `/api/*`.
- **Error Handling**: Jobs that crash are automatically marked as `FAILED` without
crashing the server. Error details are stored in the job's `error` field.
- **Log Files**: Each job maintains a persistent log file (`{job_id}.log`) in a
dedicated log directory (configurable via `--log-dir`), containing all logs from
model loading through completion or failure. Log files are named after the job ID
for easy identification.
- **Web Server** (`web_server.py`): Serves static HTML/CSS/JS files. Optionally proxies API requests
to a separate API server or relies on CORS for cross-origin requests.
- **Combined Server** (`server.py`): Legacy combined server that serves both API and static files
from a single process. Use this for simple deployments.
- **Frontend**: A vanilla HTML/CSS/JS single-page app. Jobs are polled every
2 seconds and rendered as cards with status badges and action buttons. The API
base URL can be configured via a meta tag injected by the web server.

medium

The documentation in this README appears to be out of sync with the implementation in several places, which could cause confusion for new users trying to run the UI.

Here are some specific inconsistencies:

  • Server filename: The docs refer to api_server.py and web_server.py, but the actual API server implementation is in server.py.
  • Directory structure: The "Quick Start" guide instructs users to cd frontend, but the Next.js application is located directly in the ui/ directory, not a frontend/ subdirectory.
  • Running the app: The package.json provides a start script using concurrently to run both the API and web servers, which is a more convenient way to start the application than the separate commands listed. The start:api script also uses python -m ui.server, not ui.api_server.
  • Architecture description: The description of the frontend as a "vanilla HTML/CSS/JS single-page app" in the static/ directory is inaccurate. The implementation uses Next.js, a React framework.
  • Typo: On line 53, there are trailing spaces after the endpoint path: /api/jobs/{id}/log.

Updating the README to reflect the current project structure and commands would greatly improve the developer experience.
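
Taken together, the endpoint table and example body in the README above imply a simple client flow. The following sketch builds (but does not send) a create-job request; the URL, port, and field names are copied from the README's documentation, while the helper itself is purely illustrative and not part of the PR:

```python
import json
import urllib.request

API_BASE = "http://localhost:8188/api"  # default port per the README; adjust to your setup

def create_job_request(model_id: str, prompt: str, **params) -> urllib.request.Request:
    """Build a POST /api/jobs request from the documented body format."""
    body = {"model_id": model_id, "prompt": prompt, **params}
    return urllib.request.Request(
        f"{API_BASE}/jobs",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_job_request(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
    "A curious raccoon in a sunflower field",
    num_inference_steps=50,
    num_frames=81,
    height=480,
    width=832,
    guidance_scale=5.0,
    seed=1024,
    num_gpus=1,
)
# With the API server running: urllib.request.urlopen(req) would create the job,
# and POST /api/jobs/{id}/start would then start it.
print(req.get_method(), req.full_url)
```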

@@ -0,0 +1,30 @@
{
"name": "frontend",

medium

The package name is set to "frontend", which is quite generic. Consider renaming it to something more specific to this project, like "fastvideo-ui", to avoid potential naming conflicts and improve clarity.

Suggested change

```diff
-  "name": "frontend",
+  "name": "fastvideo-ui",
```

Comment on lines +179 to +181

```ts
// Poll every 500ms when job is running, every 2s when pending
const pollIntervalMs = 2000;
pollInterval = setInterval(pollLogs, pollIntervalMs);
```

medium

The comment on line 179 states that the polling interval is "500ms when job is running, every 2s when pending", but the code on line 180 uses a hardcoded interval of 2000ms for all pollable statuses. To match the intended behavior described in the comment, you could adjust the polling interval based on the job status.

Suggested change

```diff
 // Poll every 500ms when job is running, every 2s when pending
-const pollIntervalMs = 2000;
+const pollIntervalMs = job.status === 'running' ? 500 : 2000;
 pollInterval = setInterval(pollLogs, pollIntervalMs);
```
