**Summary of Changes**

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request delivers a significant new feature: a user-friendly web interface for FastVideo's video generation capabilities. It abstracts the complexities of running video inference jobs into a manageable system, allowing users to easily control and monitor their tasks through a dedicated UI and a robust FastAPI backend. This enhancement greatly improves the accessibility and usability of the FastVideo library for various video generation purposes.
Code Review
This pull request introduces a significant new feature: a web UI for running and managing video generation jobs. The backend is built with FastAPI and includes a robust JobRunner with multi-process worker support and detailed logging capabilities. The frontend is a Next.js application.
My review focuses on improving error handling, cross-platform compatibility, and documentation accuracy. I've identified a couple of high-severity issues in the backend related to error aggregation in worker processes and signal handling that could affect robustness. I've also provided suggestions to align the README.md with the implementation to avoid confusion for new users, and a few minor improvements in the frontend code and configuration.
Overall, this is a well-structured and comprehensive addition. The changes to the core library to support logging from worker processes are particularly well-implemented.
```python
signal.signal(signal.SIGQUIT, handle_sigquit)
if hasattr(signal, "SIGQUIT"):  # SIGQUIT might not be available on all platforms (e.g., Windows)
    signal.signal(signal.SIGTERM, handle_sigterm)
```

The signal handler setup for `SIGQUIT` and `SIGTERM` is incorrect and will cause issues on different platforms:

- `signal.signal(signal.SIGQUIT, handle_sigquit)` is called before checking if `SIGQUIT` is available. This will cause an `AttributeError` on platforms where `SIGQUIT` is not defined, such as Windows.
- The registration of the `SIGTERM` handler is inside the `if hasattr(signal, "SIGQUIT")` block. This means that if `SIGQUIT` is not available, the `SIGTERM` handler will also not be registered, preventing graceful shutdown on `SIGTERM`.

The `SIGQUIT` registration should be guarded, and the `SIGTERM` registration should be unconditional (as `SIGTERM` is standard).

Suggested change:

```python
if hasattr(signal, "SIGQUIT"):  # SIGQUIT might not be available on all platforms (e.g., Windows)
    signal.signal(signal.SIGQUIT, handle_sigquit)
signal.signal(signal.SIGTERM, handle_sigterm)
```
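The guarded-registration pattern can be exercised in isolation; this is a minimal sketch (the handler names are placeholders, not the PR's actual handlers), and `signal.getsignal` confirms what was installed:

```python
import signal


def handle_sigterm(signum, frame):
    print("terminating")


def handle_sigquit(signum, frame):
    print("quitting")


def install_handlers():
    # SIGTERM exists on every platform Python supports, so register it unconditionally.
    signal.signal(signal.SIGTERM, handle_sigterm)
    # SIGQUIT is POSIX-only; guard it so the same code also runs on Windows.
    if hasattr(signal, "SIGQUIT"):
        signal.signal(signal.SIGQUIT, handle_sigquit)


install_handlers()
print(signal.getsignal(signal.SIGTERM) is handle_sigterm)  # True
```

On Windows, `signal.SIGQUIT` is simply absent, so the `hasattr` guard skips that registration instead of raising `AttributeError`.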
# FastVideo Job Runner UI

A lightweight web-based UI for creating and managing FastVideo video generation jobs.

## Quick Start

First run the API server:

```bash
python -m ui.api_server --output-dir /path/to/videos --log-dir /path/to/logs
```

The API server starts running at [http://localhost:8188](http://localhost:8188) by default. You can configure this using the `--api-url` parameter.

Now you have to configure the environment file to include the API server path.

```bash
cd frontend
cp .env.example .env.local
# Edit the file to set the API server path
```

Run the web server:

```bash
npm i && npm run dev
```

## Features

- Select from supported FastVideo text-to-video models
- Enter a prompt and configure generation parameters (steps, frames, resolution, guidance scale, seed, GPU count)
- Create, start, stop, and delete jobs via the UI
- Live-polling job status updates
- In-browser video preview for completed jobs
- Generated videos are saved to a configurable output directory

## API Endpoints

| Method   | Path                   | Description                        |
| -------- | ---------------------- | ---------------------------------- |
| `GET`    | `/api/models`          | List available models              |
| `GET`    | `/api/jobs`            | List all jobs (newest first)       |
| `GET`    | `/api/jobs/{id}`       | Get a single job's details         |
| `POST`   | `/api/jobs`            | Create a new job                   |
| `POST`   | `/api/jobs/{id}/start` | Start a pending/stopped/failed job |
| `POST`   | `/api/jobs/{id}/stop`  | Request a running job to stop      |
| `DELETE` | `/api/jobs/{id}`       | Delete a job                       |
| `GET`    | `/api/jobs/{id}/video` | Stream the generated video/image   |
| `GET`    | `/api/jobs/{id}/log`   | Download the job's log file        |

### Create Job Request Body

```json
{
  "model_id": "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
  "prompt": "A curious raccoon in a sunflower field",
  "num_inference_steps": 50,
  "num_frames": 81,
  "height": 480,
  "width": 832,
  "guidance_scale": 5.0,
  "seed": 1024,
  "num_gpus": 1
}
```

## Architecture

```
ui/
├── server.py          # Combined FastAPI server (API + static files)
├── api_server.py      # API-only server (REST endpoints)
├── web_server.py      # Web-only server (static files + optional API proxy)
├── requirements.txt   # Python dependencies (fastapi, uvicorn, httpx)
└── static/
    ├── index.html     # Single-page application
    ├── style.css      # Dark-themed responsive styles
    └── app.js         # Frontend logic (fetch API, polling, rendering)
```

- **API Server** (`api_server.py`): A FastAPI server that manages an in-memory job store. Each job runs in a daemon thread that uses `fastvideo.VideoGenerator` to generate videos. Model instances are cached so switching between prompts on the same model doesn't reload weights. Provides REST endpoints under `/api/*`.
- **Error Handling**: Jobs that crash are automatically marked as `FAILED` without crashing the server. Error details are stored in the job's `error` field.
- **Log Files**: Each job maintains a persistent log file (`{job_id}.log`) in a dedicated log directory (configurable via `--log-dir`), containing all logs from model loading through completion or failure. Log files are named after the job ID for easy identification.
- **Web Server** (`web_server.py`): Serves static HTML/CSS/JS files. Optionally proxies API requests to a separate API server or relies on CORS for cross-origin requests.
- **Combined Server** (`server.py`): Legacy combined server that serves both API and static files from a single process. Use this for simple deployments.
- **Frontend**: A vanilla HTML/CSS/JS single-page app. Jobs are polled every 2 seconds and rendered as cards with status badges and action buttons. The API base URL can be configured via a meta tag injected by the web server.
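As a quick sanity check of the documented create-job schema, the payload for `POST /api/jobs` can be assembled and serialized like this (a minimal sketch; `build_job_request` is a hypothetical helper for illustration, not part of the PR):

```python
import json


def build_job_request(model_id: str, prompt: str, **params) -> dict:
    """Assemble a create-job payload matching the documented schema."""
    payload = {"model_id": model_id, "prompt": prompt}
    payload.update(params)  # steps, frames, resolution, guidance, seed, GPUs
    return payload


payload = build_job_request(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
    "A curious raccoon in a sunflower field",
    num_inference_steps=50,
    num_frames=81,
    height=480,
    width=832,
    guidance_scale=5.0,
    seed=1024,
    num_gpus=1,
)
body = json.dumps(payload)
# Then POST it, e.g.:
# requests.post(f"{api_url}/api/jobs", data=body,
#               headers={"Content-Type": "application/json"})
```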
The documentation in this README appears to be out of sync with the implementation in several places, which could cause confusion for new users trying to run the UI.

Here are some specific inconsistencies:

- Server filenames: The docs refer to `api_server.py` and `web_server.py`, but the actual API server implementation is in `server.py`.
- Directory structure: The "Quick Start" guide instructs users to `cd frontend`, but the Next.js application is located directly in the `ui/` directory, not a `frontend/` subdirectory.
- Running the app: The `package.json` provides a `start` script using `concurrently` to run both the API and web servers, which is a more convenient way to start the application than the separate commands listed. The `start:api` script also uses `python -m ui.server`, not `ui.api_server`.
- Architecture description: The description of the frontend as a "vanilla HTML/CSS/JS single-page app" in the `static/` directory is inaccurate. The implementation uses Next.js, a React framework.
- Typo: On line 53, there are trailing spaces after the endpoint path `/api/jobs/{id}/log`.

Updating the README to reflect the current project structure and commands would greatly improve the developer experience.
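For reference, the scripts described above presumably look roughly like this in the project's `package.json` (a sketch inferred from the review's description; the exact `start:web` and `dev` commands are assumptions):

```json
{
  "scripts": {
    "dev": "next dev",
    "start:api": "python -m ui.server",
    "start:web": "next start",
    "start": "concurrently \"npm run start:api\" \"npm run start:web\""
  }
}
```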
```javascript
// Poll every 500ms when job is running, every 2s when pending
const pollIntervalMs = 2000;
pollInterval = setInterval(pollLogs, pollIntervalMs);
```

The comment states that the polling interval is "500ms when job is running, every 2s when pending", but the code uses a hardcoded interval of 2000ms for all pollable statuses. To match the intended behavior described in the comment, you could adjust the polling interval based on the job status.

Suggested change:

```javascript
// Poll every 500ms when job is running, every 2s when pending
const pollIntervalMs = job.status === 'running' ? 500 : 2000;
pollInterval = setInterval(pollLogs, pollIntervalMs);
```
This PR adds a UI that allows running jobs for various purposes. Currently only video inference is supported with the following parameters:
Jobs can be:
All models that are registered through FastVideo registry are supported.
Currently the following features are included:
To run, navigate to the `ui` directory and run:

There are two services that start concurrently: the web server and the API server. They can also be started separately using `npm run start:api` and `npm run start:web`. In this case, the `.env.local` file can be used to configure the API server URL (it defaults to `http://localhost:8089`).

The web server can also be run in dev mode using `npm run dev`. This will automatically restart the server upon any changes made to the UI.

Demo video is shown below:
Screen.Recording.2026-02-28.at.1.31.12.PM.480p.mov