Pin huggingface_hub to latest version 1.5.0#748

Open
pyup-bot wants to merge 1 commit into `master` from `pyup-pin-huggingface_hub-1.5.0`

Conversation

@pyup-bot (Collaborator)

This PR pins huggingface_hub to the latest release 1.5.0.

Changelog

1.5.0

This release introduces major new features including **Buckets** (xet-based large scale object storage), CLI Extensions, Space Hot-Reload, and significant improvements for AI coding agents. The CLI has been completely overhauled with centralized error handling, better help output, and new commands for collections, papers, and more.

🪣 Buckets: S3-like Object Storage on the Hub

Buckets provide S3-like object storage on Hugging Face, powered by the Xet storage backend. Unlike repositories (which are git-based and track file history), buckets are remote object storage containers designed for large-scale files with content-addressable deduplication. Use them for training checkpoints, logs, intermediate artifacts, or any large collection of files that doesn't need version control.

```bash
# Create a bucket
hf buckets create my-bucket --private

# Upload a directory
hf buckets sync ./data hf://buckets/username/my-bucket

# Download from a bucket
hf buckets sync hf://buckets/username/my-bucket ./data

# List files
hf buckets list username/my-bucket -R --tree
```


The Buckets API includes full CLI and Python support for creating, listing, moving, and deleting buckets; uploading, downloading, and syncing files; and managing bucket contents with include/exclude patterns.

- Buckets API and CLI by Wauplin in 3673
- Support bucket rename/move in API + CLI by Wauplin in 3843
- Add 'sync_bucket' to HfApi by Wauplin in 3845
- hf buckets file deletion by Wauplin in 3849
- Update message when no buckets found by Wauplin in 3850
- Buckets doc `hf` install by julien-c in 3846

📚 **Documentation:** [Buckets guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/buckets)
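The release notes don't detail Xet's chunking scheme, but content-addressable deduplication itself is easy to illustrate. Below is a toy sketch with fixed-size chunks and an in-memory store; the real backend uses content-defined chunking against a remote store, so treat this purely as an illustration of the idea:

```python
import hashlib


def chunk_and_store(data: bytes, store: dict, chunk_size: int = 4) -> list:
    """Split data into fixed-size chunks and store each under its content hash.

    Identical chunks map to the same hash, so they are stored only once.
    """
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks are not stored twice
        refs.append(digest)
    return refs


def reassemble(refs: list, store: dict) -> bytes:
    """Rebuild the original file from its list of chunk references."""
    return b"".join(store[d] for d in refs)


store = {}
refs = chunk_and_store(b"abcdabcdabcd", store, chunk_size=4)
assert reassemble(refs, store) == b"abcdabcdabcd"
assert len(refs) == 3 and len(store) == 1  # three chunks, one stored copy
```

This is why buckets suit training checkpoints: large files that share most of their content cost little extra storage.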


🤖 AI Agent Support

This release includes several features designed to improve the experience for AI coding agents (Claude Code, OpenCode, Cursor, etc.):

- **Centralized CLI error handling**: Clean user-facing messages without tracebacks (set `HF_DEBUG=1` for full traces) by hanouticelina in 3754
- **Token-efficient skill**: The `hf skills add` command now installs a compact skill (~1.2k tokens vs ~12k before) by hanouticelina in 3802
- **Agent-friendly `hf jobs logs`**: Prints available logs and exits by default; use `-f` to stream by davanstrien in 3783
- **Add AGENTS.md**: Dev setup and codebase guide for AI agents by Wauplin in 3789

```bash
# Install the hf-cli skill for Claude
hf skills add --claude

# Install at the project level
hf skills add --project
```


- Add `hf skills add` CLI command by julien-c in 3741
- `hf skills add` installs to central location with symlinks by hanouticelina in 3755
- Add Cursor skills support by NielsRogge in 3810

🔥 Space Hot-Reload (Experimental)

Hot-reload Python files in a Space without a full rebuild and restart. This is useful for rapid iteration on Gradio apps.

```bash
# Open an interactive editor to modify a remote file
hf spaces hot-reload username/repo-name app.py

# Take the local version and patch the remote file
hf spaces hot-reload username/repo-name -f app.py
```


- feat(spaces): hot-reload by cbensimon in 3776
- fix hot reload reference part.2 by cbensimon in 3820


🖥️ CLI Improvements

New Commands

- Add `hf papers ls` to list daily papers on the Hub by julien-c in 3723
- Add `hf collections` commands (ls, info, create, update, delete, add-item, update-item, delete-item) by Wauplin in 3767

CLI Extensions

Introduce an extension mechanism to the `hf` CLI. Extensions are standalone executables hosted in GitHub repositories that users can install, run, and remove with simple commands. Inspired by `gh extension`.

```bash
# Install an extension (defaults to the huggingface org)
hf extensions install hf-claude

# Install from any GitHub owner
hf extensions install hanouticelina/hf-claude

# Run an extension
hf claude

# List installed extensions
hf extensions list
```


- Add `hf extension` by hanouticelina in 3805
- Add `hf ext` alias by hanouticelina in 3836
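The owner-defaulting behavior above ("defaults to huggingface org") amounts to a one-line resolution rule. A minimal sketch, with a hypothetical function name (this is not the CLI's actual code):

```python
def resolve_extension_repo(name: str, default_owner: str = "huggingface") -> str:
    """Resolve an extension name to a GitHub 'owner/repo' pair.

    A bare name falls back to the default owner; an explicit
    'owner/name' is used as-is.
    """
    return name if "/" in name else f"{default_owner}/{name}"


assert resolve_extension_repo("hf-claude") == "huggingface/hf-claude"
assert resolve_extension_repo("hanouticelina/hf-claude") == "hanouticelina/hf-claude"
```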

Output Format Options

- Add `--format {table,json}` and `-q/--quiet` to `hf models ls`, `hf datasets ls`, `hf spaces ls`, `hf endpoints ls` by hanouticelina in 3735
- Align `hf jobs ps` output with standard CLI pattern by davanstrien in 3799
- Dynamic table columns based on `--expand` field by hanouticelina in 3760
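The `--format {table,json}` / `-q` combination follows a common CLI convention: aligned table for humans, JSON for tooling, bare ids for scripting. A generic sketch of that pattern (not the `hf` implementation):

```python
import json


def render(rows: list, fmt: str = "table", quiet: bool = False) -> str:
    """Render records as an aligned table, JSON, or ids only (like -q)."""
    if quiet:
        return "\n".join(r["id"] for r in rows)
    if fmt == "json":
        return json.dumps(rows, indent=2)
    # Table: pad every column to the widest cell in that column.
    headers = list(rows[0])
    widths = [max(len(h), *(len(str(r[h])) for r in rows)) for h in headers]
    lines = [" ".join(h.upper().ljust(w) for h, w in zip(headers, widths))]
    for r in rows:
        lines.append(" ".join(str(r[h]).ljust(w) for h, w in zip(headers, widths)))
    return "\n".join(lines)


rows = [{"id": "gpt2", "downloads": 100}, {"id": "bert-base-uncased", "downloads": 50}]
assert render(rows, quiet=True) == "gpt2\nbert-base-uncased"
assert json.loads(render(rows, fmt="json")) == rows
assert render(rows).splitlines()[0].startswith("ID")
```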

Usability

- Improve `hf` CLI help output with examples and documentation links by hanouticelina in 3743
- Add `-h` as short alias for `--help` by assafvayner in 3800
- Add hidden `--version` flag by Wauplin in 3784
- Add `--type` as alias for `--repo-type` by Wauplin in 3835
- Better handling of aliases in documentation by Wauplin in 3840
- Print first example only in group command --help by Wauplin in 3841
- Subfolder download: `hf download repo_id subfolder/` now works as expected by Wauplin in 3822

Jobs CLI

List available hardware:

```console
> hf jobs hardware
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR       COST/MIN COST/HOUR
--------------- ---------------------- -------- ------- ----------------- -------- ---------
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A               $0.0002  $0.01
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A               $0.0005  $0.03
cpu-performance CPU Performance        32 vCPU  256 GB  N/A               $0.3117  $18.70
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A               $0.0167  $1.00
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)     $0.0067  $0.40
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)     $0.0100  $0.60
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)   $0.0167  $1.00
...
```


This release also ships many smaller fixes and quality-of-life improvements:

- Support multi GPU training commands (`torchrun`, `accelerate launch`) by lhoestq in 3674
- Pass local script and config files to job by lhoestq in 3724
- List available hardware with `hf jobs hardware` by Wauplin in 3693
- Better jobs filtering in CLI: labels and negation (`!=`) by lhoestq in 3742
- Accept namespace/job_id format in jobs CLI commands by davanstrien in 3811
- Pass namespace parameter to fetch job logs by Praful932 in 3736
- Add more error handling output to hf jobs cli commands by davanstrien in 3744
- Fix `hf jobs` commands crashing without a TTY by davanstrien in 3782

🤖 Inference

- Add `dimensions` & `encoding_format` parameter to InferenceClient for output embedding size by mishig25 in 3671
- feat: zai-org provider supports text to image by tomsun28 in 3675
- Fix fal image urls payload by hanouticelina in 3746
- Fix Replicate `image-to-image` compatibility with different model schemas by hanouticelina in 3749
- Accelerator parameter support for inference endpoints by Wauplin in 3817

🔧 Other QoL Improvements

- Support setting Label in Jobs API by Wauplin in 3719
- Document built-in environment variables in Jobs docs (JOB_ID, ACCELERATOR, CPU_CORES, MEMORY) by Wauplin in 3834
- Fix ReadTimeout crash in no-follow job logs by davanstrien in 3793
- Add evaluation results module (`EvalResultEntry`, `parse_eval_result_entries`) by hanouticelina in 3633
- Add source org field to `EvalResultEntry` by hanouticelina in 3694
- Add limit param to list_papers API method by Wauplin in 3697
- Add `num_papers` field to Organization class by cfahlgren1 in 3695
- Update MAX_FILE_SIZE_GB from 50 to 200 by davanstrien in 3696
- List datasets benchmark alias (`benchmark=True` → `benchmark="official"`) by Wauplin in 3734
- Add notes field to `EvalResultEntry` by Wauplin in 3738
- Make `task_id` required in `EvalResultEntry` by Wauplin in 3718
- Repo commit count warning for `upload_large_folder` by Wauplin in 3698
- Replace deprecated is_enterprise boolean by `plan` string in org info by Wauplin in 3753
- Update hardware list in SpaceHardware enum by lhoestq in 3756
- Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by Wauplin in 3751
- No timeout by default when using httpx by Wauplin in 3790
- Log 'x-amz-cf-id' on http error (if no request id) by Wauplin in 3759
- Parse xet hash from tree listing by seanses in 3780
- Require filelock>=3.10.0 for `mode=` parameter support by Wauplin in 3785
- Add overload decorators to `HfApi.snapshot_download` for dry_run typing by Wauplin in 3788
- Dataclass doesn't call original `__init__` by zucchini-nlp in 3818
- Strict dataclass sequence validation by Wauplin in 3819
- Check if `dataclass.repr=True` before wrapping by zucchini-nlp in 3823

💔 Breaking Changes

- `hf jobs ps` removes the old Go-template `--format '{{.id}}'` syntax. Use `-q` for ids, or `--format json | jq` for custom extraction, by davanstrien in 3799
- Migrate to `hf repos` instead of `hf repo` (old command still works but shows deprecation warning) by Wauplin in 3848
- Migrate `hf repo-files delete` to `hf repo delete-files` (old command hidden from help, shows deprecation warning) by Wauplin in 3821
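For the `hf jobs ps` migration, the old Go-template extraction maps directly onto JSON parsing. A sketch with a hypothetical output shape (the real JSON fields may differ):

```python
import json

# Hypothetical `hf jobs ps --format json` output, for illustration only.
output = '[{"id": "job-1", "status": "running"}, {"id": "job-2", "status": "error"}]'

# Equivalent of the removed --format '{{.id}}' Go-template extraction,
# i.e. what `--format json | jq '.[].id'` would produce.
ids = [job["id"] for job in json.loads(output)]
assert ids == ["job-1", "job-2"]
```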

🐛 Bug and typo fixes

- Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile by leq6c in 3685
- Fix endpoint not forwarded in CommitUrl by Wauplin in 3679
- Fix `HfFileSystem.resolve_path()` with special char `` by lhoestq in 3704
- Fix cache verify incorrectly reporting folders as missing files by Mitix-EPI in 3707
- Fix multi user cache lock permissions by hanouticelina in 3714
- Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by tomaarsen in 3737
- Filter datasets by benchmark:official by Wauplin in 3761
- Fix file corruption when server ignores Range header on download retry by XciD in 3778
- Fix Xet token invalid on repo recreation by Wauplin in 3847
- Correct typo 'occured' to 'occurred' by thecaptain789 in 3787
- Fix typo in CLI error handling by hanouticelina in 3757
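The Range-header fix above guards against a subtle corruption mode when resuming a download. A simplified sketch of the decision involved (illustrative only, not the library's retry code):

```python
def resume_download(existing: bytes, status_code: int, body: bytes) -> bytes:
    """Combine a retry response with already-downloaded bytes.

    A resume request sends 'Range: bytes=<len(existing)>-'. A server honoring
    it replies 206 with only the missing tail; a server ignoring it replies
    200 with the full file. Blindly appending a 200 body would duplicate the
    prefix and corrupt the file.
    """
    if status_code == 206:
        return existing + body  # partial content: append the missing tail
    if status_code == 200:
        return body             # full content: discard the partial data
    raise RuntimeError(f"unexpected status {status_code}")


full = b"0123456789"
partial = full[:4]
assert resume_download(partial, 206, full[4:]) == full
assert resume_download(partial, 200, full) == full  # ignored Range handled safely
```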

📖 Documentation

- Add link to Hub Jobs documentation by gary149 in 3712
- Update HTTP backend configuration link to main branch by IliasAarab in 3713
- Update CLI help output in docs to include new commands by julien-c in 3722
- Wildcard pattern documentation by hanouticelina in 3710
- Deprecate `hf_transfer` references in Korean and German translations by davanstrien in 3804
- Use SPDX license identifier 'Apache-2.0' by yesudeep in 3814
- Correct img tag style in README.md by sadnesslovefreedom-debug in 3689

🏗️ Internal

- Change external dependency from `typer-slim` to `typer` by svlandeg in 3797
- Remove `shellingham` from the required dependencies by hanouticelina in 3798
- Ignore `unused-ignore-comment` warnings in `ty` for `mypy` compatibility by hanouticelina in 3691
- Remove new `unused-type-ignore-comment` warning from `ty` by hanouticelina in 3803
- Fix curlify when debug logging is enabled for streaming requests by hanouticelina in 3692
- Remove canonical dataset test case by hanouticelina in 3740
- Remove broad exception handling from CLI job commands by hanouticelina in 3748
- CI windows permission error by Wauplin in 3700
- Upgrade GitHub Actions to latest versions by salmanmkc in 3729
- Stabilize lockfile test in `file_download` tests by hanouticelina in 3815
- Fix ty invalid assignment in `CollectionItem` by hanouticelina in 3831
- Use `inference_provider` instead of `inference` in tests by hanouticelina in 3826
- Fix tqdm windows test failure by Wauplin in 3844
- Add test for check if dataclass.repr=True before wrapping by Wauplin in 3852
- Prepare for v1.5 by Wauplin in 3781

1.4.1

Fix file corruption when server ignores Range header on download retry.
Full details in https://github.com/huggingface/huggingface_hub/pull/3778 by XciD.

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.4.0...v1.4.1

1.4.0

🧠 `hf skills add` CLI Command

A new `hf skills add` command installs the `hf-cli` skill for AI coding assistants (Claude Code, Codex, OpenCode). Your AI Agent now knows how to search the Hub, download models, run Jobs, manage repos, and more.

```console
> hf skills add --help
Usage: hf skills add [OPTIONS]

Download a skill and install it for an AI assistant.

Options:
  --claude      Install for Claude.
  --codex       Install for Codex.
  --opencode    Install for OpenCode.
  -g, --global  Install globally (user-level) instead of in the current
                project directory.
  --dest PATH   Install into a custom destination (path to skills directory).
  --force       Overwrite existing skills in the destination.
  --help        Show this message and exit.

Examples
  $ hf skills add --claude
  $ hf skills add --claude --global
  $ hf skills add --codex --opencode

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli
```


The skill is composed of two files fetched from the `huggingface_hub` docs: a CLI guide (`SKILL.md`) and the full CLI reference (`references/cli.md`). Files are installed to a central `.agents/skills/hf-cli/` directory, and relative symlinks are created from agent-specific directories (e.g., `.claude/skills/hf-cli/` → `../../.agents/skills/hf-cli/`). This ensures a single source of truth when installing for multiple agents.

- Add `hf skills add` CLI command by julien-c in 3741
- [CLI] `hf skills add` installs hf-cli skill to central location with symlinks by hanouticelina in 3755
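The central-location-with-symlinks layout described above can be reproduced in a few lines. A sketch using the paths from the description (the logic is illustrative, not the installer's code; requires a platform where `os.symlink` is permitted):

```python
import os
import tempfile

# Central copy: .agents/skills/hf-cli/ holds the actual skill files.
root = tempfile.mkdtemp()
central = os.path.join(root, ".agents", "skills", "hf-cli")
os.makedirs(central)
with open(os.path.join(central, "SKILL.md"), "w") as f:
    f.write("hf CLI skill")

# Agent-specific directory gets a *relative* symlink back to the central copy.
agent_dir = os.path.join(root, ".claude", "skills")
os.makedirs(agent_dir)
os.symlink(os.path.join("..", "..", ".agents", "skills", "hf-cli"),
           os.path.join(agent_dir, "hf-cli"))

# Reading through the agent path resolves to the central copy:
# one source of truth, even when several agents are configured.
with open(os.path.join(agent_dir, "hf-cli", "SKILL.md")) as f:
    assert f.read() == "hf CLI skill"
```

Relative (rather than absolute) symlinks keep the layout valid if the whole project directory is moved.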

🖥️ Improved CLI Help Output

The CLI help output has been reorganized to be more informative and agent-friendly:

- Commands are now grouped into **Main commands** and **Help commands**
- **Examples** section showing common usage patterns
- **Learn more** section with links to documentation

```console
> hf cache --help
Usage: hf cache [OPTIONS] COMMAND [ARGS]...

Manage local cache directory.

Options:
  --help  Show this message and exit.

Main commands:
  ls      List cached repositories or revisions.
  prune   Remove detached revisions from the cache.
  rm      Remove cached repositories or revisions.
  verify  Verify checksums for a single repo revision from cache or a local
          directory.

Examples
  $ hf cache ls
  $ hf cache ls --revisions
  $ hf cache ls --filter "size>1GB" --limit 20
  $ hf cache ls --format json
  $ hf cache prune
  $ hf cache prune --dry-run
  $ hf cache rm model/gpt2
  $ hf cache rm <revision_hash>
  $ hf cache rm model/gpt2 --dry-run
  $ hf cache rm model/gpt2 --yes
  $ hf cache verify gpt2
  $ hf cache verify gpt2 --revision refs/pr/1
  $ hf cache verify my-dataset --repo-type dataset

Learn more
  Use `hf <command> --help` for more information about a command.
  Read the documentation at
  https://huggingface.co/docs/huggingface_hub/en/guides/cli
```


- [CLI] improve `hf` CLI help output by hanouticelina in 3743

📊 Evaluation Results Module

The Hub now has a decentralized system for tracking model evaluation results. Benchmark datasets (like [MMLU-Pro](https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro), [HLE](https://huggingface.co/datasets/cais/hle), [GPQA](https://huggingface.co/datasets/Idavidrein/gpqa)) host leaderboards, and model repos store evaluation scores in `.eval_results/*.yaml` files. These results automatically appear on both the model page and the benchmark's leaderboard. See the [Evaluation Results documentation](https://huggingface.co/docs/hub/eval-results) for more details.

We added helpers in `huggingface_hub` to work with this format:

- `EvalResultEntry` dataclass representing evaluation scores
- `eval_result_entries_to_yaml()` to serialize entries to YAML format
- `parse_eval_result_entries()` to parse YAML data back into `EvalResultEntry` objects

```python
import yaml

from huggingface_hub import EvalResultEntry, eval_result_entries_to_yaml, upload_file

entries = [
    EvalResultEntry(dataset_id="cais/hle", task_id="default", value=20.90),
    EvalResultEntry(dataset_id="Idavidrein/gpqa", task_id="gpqa_diamond", value=0.412),
]
yaml_content = yaml.dump(eval_result_entries_to_yaml(entries))
upload_file(
    path_or_fileobj=yaml_content.encode(),
    path_in_repo=".eval_results/results.yaml",
    repo_id="your-username/your-model",
)
```


- Add evaluation results module by hanouticelina in 3633
- Eval results synchronization by Wauplin in 3718
- Eval results notes by Wauplin in 3738

🖥️ Other CLI Improvements

New `hf papers ls` command to list daily papers on the Hub, with support for filtering by date and sorting by trending or publication date.

```console
hf papers ls                        # List most recent daily papers
hf papers ls --sort=trending        # List trending papers
hf papers ls --date=2025-01-23      # List papers from a specific date
hf papers ls --date=today           # List today's papers
```


- Add `hf papers ls` CLI command by julien-c in 3723

New `hf collections` commands for managing collections from the CLI:

```console
# List collections
hf collections ls --owner nvidia --limit 5
hf collections ls --sort trending

# Create a collection
hf collections create "My Models" --description "Favorites" --private

# Add items
hf collections add-item user/my-coll models/gpt2 model
hf collections add-item user/my-coll datasets/squad dataset --note "QA dataset"

# Get info
hf collections info user/my-coll

# Delete
hf collections delete user/my-coll
```


- [CLI] Add `hf collections` commands by Wauplin in 3767

Other CLI-related improvements:

- [CLI] output format option for ls CLIs by hanouticelina in 3735
- [CLI] Dynamic table columns based on `--expand` field by hanouticelina in 3760
- [CLI] Adds centralized error handling by hanouticelina in 3754
- [CLI] exception handling scope by hanouticelina in 3748
- Update CLI help output in docs to include new commands by julien-c in 3722

📊 Jobs

Multi-GPU training commands are now supported with `torchrun` and `accelerate launch`:

```bash
> hf jobs uv run --with torch -- torchrun train.py
> hf jobs uv run --with accelerate -- accelerate launch train.py
```


You can also pass local config files alongside your scripts:

```bash
> hf jobs uv run script.py config.yml
> hf jobs uv run --with torch torchrun script.py config.yml
```


New `hf jobs hardware` command to list available hardware options:

```console
> hf jobs hardware
NAME         PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR
------------ ---------------------- -------- ------- ---------------- -------- ---------
cpu-basic    CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01
cpu-upgrade  CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03
t4-small     Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40
t4-medium    Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60
a10g-small   Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00
a10g-large   Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50
a10g-largex2 2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00
a10g-largex4 4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00
a100-large   Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50
a100x4       4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00
a100x8       8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00
l4x1         1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80
l4x4         4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80
l40sx1       1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80
l40sx4       4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30
l40sx8       8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50
```


Better filtering with label support and negation:

```bash
> hf jobs ps -a --filter status!=error
> hf jobs ps -a --filter label=fine-tuning
> hf jobs ps -a --filter label=model=Qwen3-06B
```


- [Jobs] Support multi gpu training commands by lhoestq in 3674
- [Jobs] List available hardware by Wauplin in 3693
- [Jobs] Better jobs filtering in CLI: labels and negation by lhoestq in 3742
- Pass local script and config files to job by lhoestq in 3724
- Support setting Label in Jobs API by Wauplin in 3719
- Pass namespace parameter to fetch job logs in jobs CLI by Praful932 in 3736
- Add more error handling output to hf jobs cli commands by davanstrien in 3744
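The `key=value` / `key!=value` filter syntax shown above can be modeled with a small matcher. An illustrative sketch, not the CLI's actual parser:

```python
def parse_filter(expr: str):
    """Parse 'key=value' or 'key!=value' into (key, value, negated)."""
    if "!=" in expr:
        key, value = expr.split("!=", 1)
        return key, value, True
    key, value = expr.split("=", 1)
    return key, value, False


def matches(job: dict, expr: str) -> bool:
    key, value, negated = parse_filter(expr)
    return (job.get(key) != value) if negated else (job.get(key) == value)


jobs = [{"id": "a", "status": "error"}, {"id": "b", "status": "running"}]
assert [j["id"] for j in jobs if matches(j, "status!=error")] == ["b"]
assert [j["id"] for j in jobs if matches(j, "status=running")] == ["b"]
```

Splitting on the first `=` only is what lets a value itself contain `=`, as in `label=model=Qwen3-06B`.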

⚡️ Inference

- Add dimensions & encoding_format parameter to InferenceClient for output embedding size by mishig25 in 3671
- feat: zai-org provider supports text to image by tomsun28 in 3675
- [Inference Providers] fix fal image urls payload by hanouticelina in 3746
- Fix Replicate image-to-image compatibility with different model schemas by hanouticelina in 3749

🔧 QoL Improvements

- add source org field by hanouticelina in 3694
- add num_papers field to Organization class by cfahlgren1 in 3695
- Add limit param to list_papers API method by Wauplin in 3697
- Repo commit count warning by Wauplin in 3698
- List datasets benchmark alias by Wauplin in 3734
- List repo files repoType by Wauplin in 3753
- Update hardware list in SpaceHardware enum by lhoestq in 3756
- Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by Wauplin in 3751
- Default _endpoint to None in CommitInfo by tomaarsen in 3737
- Update MAX_FILE_SIZE_GB from 50 to 200 to match hub-docs PR 2169 by davanstrien in 3696
- Pass kwargs to post init in dataclasses by zucchini-nlp in 3771 
- Add retry/backoff when fetching Xet connection info to handle 502 errors by aabhathanki in 3768 

📖 Documentation

- Wildcard pattern documentation by hanouticelina in 3710
- Add link to Hub Jobs documentation by gary149 in 3712
- Update HTTP backend configuration link to main branch by IliasAarab in 3713
- Correct img tag style in README.md by sadnesslovefreedom-debug in 3689


🐛 Bug and typo fixes

- Fix endpoint not forwarded in CommitUrl by Wauplin in 3679
- fix curlify with streaming request by hanouticelina in 3692
- Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile by leq6c in 3685
- fix resolve_path() with special char  by lhoestq in 3704
- Fix cache verify incorrectly reporting folders as missing files by Mitix-EPI in 3707
- Fix multi user cache lock permissions by hanouticelina in 3714
- [CLI] Fix typo in CLI error handling by hanouticelina in 3757
- Log 'x-amz-cf-id' on http error (if no request id) by Wauplin in 3759
- [Fix] Filter datasets by benchmark official by Wauplin in 3761

🏗️ Internal

- Ignore unused-ignore-comment warnings in ty for mypy compatibility by hanouticelina in 3691
- Skip sync test on Windows Python 3.14 by Wauplin in 3700
- Upgrade GitHub Actions to latest versions by salmanmkc in 3729
- Remove canonical dataset test case from test_access_repositories_lists by hanouticelina in 3740
- Fix style issues in CI by Wauplin in 3773 

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* tomsun28
 * feat: zai-org provider supports text to image (3675)
* leq6c
 * Fix severe performance regression in streaming by keeping a byte iterator in HfFileSystemStreamFile (3685)
* Mitix-EPI
 * Fix cache verify incorrectly reporting folders as missing files (3707)
* Praful932
 * Pass namespace parameter to fetch job logs in jobs CLI (3736)
* aabhathanki
 * Add retry/backoff when fetching Xet connection info to handle 502 errors (3768)

1.3.7

Log 'x-amz-cf-id' on http error (if no request id) (https://github.com/huggingface/huggingface_hub/pull/3759)

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.3.5...v1.3.7

1.3.5

- Use HF_HUB_DOWNLOAD_TIMEOUT as default httpx timeout by Wauplin in 3751

The default timeout is 10s, which is fine for most use cases but can trigger errors in CI environments that make many requests to the Hub. In that case, set `HF_HUB_DOWNLOAD_TIMEOUT=60` as an environment variable.
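The resolution amounts to an environment-variable lookup with a fallback. A hedged sketch of that behavior (not the library's exact code):

```python
import os

DEFAULT_TIMEOUT = 10.0  # seconds, the documented default


def resolve_timeout(env=os.environ) -> float:
    """Return the HTTP timeout, honoring HF_HUB_DOWNLOAD_TIMEOUT if set."""
    raw = env.get("HF_HUB_DOWNLOAD_TIMEOUT")
    return float(raw) if raw else DEFAULT_TIMEOUT


assert resolve_timeout({}) == 10.0
assert resolve_timeout({"HF_HUB_DOWNLOAD_TIMEOUT": "60"}) == 60.0
```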

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.3.4...v1.3.5

1.3.4

- Default _endpoint to None in CommitInfo, fixes tiny regression from v1.3.3 by tomaarsen in https://github.com/huggingface/huggingface_hub/pull/3737

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.3.3...v1.3.4

1.3.3

⚙️ List Jobs Hardware

You can now list all available hardware options for Hugging Face Jobs, both from the CLI and programmatically.

From the CLI:
```console
➜ hf jobs hardware
NAME            PRETTY NAME            CPU      RAM     ACCELERATOR      COST/MIN COST/HOUR
--------------- ---------------------- -------- ------- ---------------- -------- ---------
cpu-basic       CPU Basic              2 vCPU   16 GB   N/A              $0.0002  $0.01
cpu-upgrade     CPU Upgrade            8 vCPU   32 GB   N/A              $0.0005  $0.03
cpu-performance CPU Performance        8 vCPU   32 GB   N/A              $0.0000  $0.00
cpu-xl          CPU XL                 16 vCPU  124 GB  N/A              $0.0000  $0.00
t4-small        Nvidia T4 - small      4 vCPU   15 GB   1x T4 (16 GB)    $0.0067  $0.40
t4-medium       Nvidia T4 - medium     8 vCPU   30 GB   1x T4 (16 GB)    $0.0100  $0.60
a10g-small      Nvidia A10G - small    4 vCPU   15 GB   1x A10G (24 GB)  $0.0167  $1.00
a10g-large      Nvidia A10G - large    12 vCPU  46 GB   1x A10G (24 GB)  $0.0250  $1.50
a10g-largex2    2x Nvidia A10G - large 24 vCPU  92 GB   2x A10G (48 GB)  $0.0500  $3.00
a10g-largex4    4x Nvidia A10G - large 48 vCPU  184 GB  4x A10G (96 GB)  $0.0833  $5.00
a100-large      Nvidia A100 - large    12 vCPU  142 GB  1x A100 (80 GB)  $0.0417  $2.50
a100x4          4x Nvidia A100         48 vCPU  568 GB  4x A100 (320 GB) $0.1667  $10.00
a100x8          8x Nvidia A100         96 vCPU  1136 GB 8x A100 (640 GB) $0.3333  $20.00
l4x1            1x Nvidia L4           8 vCPU   30 GB   1x L4 (24 GB)    $0.0133  $0.80
l4x4            4x Nvidia L4           48 vCPU  186 GB  4x L4 (96 GB)    $0.0633  $3.80
l40sx1          1x Nvidia L40S         8 vCPU   62 GB   1x L40S (48 GB)  $0.0300  $1.80
l40sx4          4x Nvidia L40S         48 vCPU  382 GB  4x L40S (192 GB) $0.1383  $8.30
l40sx8          8x Nvidia L40S         192 vCPU 1534 GB 8x L40S (384 GB) $0.3917  $23.50
```


Programmatically:
```python
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> hardware_list = api.list_jobs_hardware()
>>> hardware_list[0]
JobHardware(name='cpu-basic', pretty_name='CPU Basic', cpu='2 vCPU', ram='16 GB', accelerator=None, unit_cost_micro_usd=167, unit_cost_usd=0.000167, unit_label='minute')
>>> hardware_list[0].name
'cpu-basic'
```

- [Jobs] List available hardware in 3693 by Wauplin

🐛 Bug Fixes

- Fix severe performance regression in streaming by keeping a byte iterator in `HfFileSystemStreamFile` in 3685 by leq6c
- Fix verify incorrectly reporting folders as missing files in 3707 by Mitix-EPI
- Fix `resolve_path()` with special char in 3704 by lhoestq
- Fix curlify with streaming request in 3692 by hanouticelina

✨ Various Improvements

- Add `num_papers` field to Organization class in 3695 by cfahlgren1
- Add `limit` param to `list_papers` API method in 3697 by Wauplin
- Add repo commit count warning when exceeding recommended limits in 3698 by Wauplin
- Update `MAX_FILE_SIZE_GB` from 50 to 200 GB in 3696 by davanstrien

📚 Documentation

- Wildcard pattern documentation in 3710 by hanouticelina

1.3.2

- Fix endpoint not forwarded in CommitUrl 3679 by Wauplin 
- feat: zai-org provider supports text to image 3675 by tomsun28

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v1.3.1...v1.3.2

1.3.1

- Add `dimensions` & `encoding_format` parameter to InferenceClient for output embedding size 3671 by mishig25 


**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.3.0...v1.3.1

1.3.0

🖥️ CLI: `hf models`, `hf datasets`, `hf spaces` Commands

The CLI has been reorganized with dedicated commands for Hub discovery, while `hf repo` stays focused on managing your own repositories.

**New commands:**

```console
# Models
hf models ls --author=Qwen --limit=10
hf models info Qwen/Qwen-Image-2512

# Datasets
hf datasets ls --filter "format:parquet" --sort=downloads
hf datasets info HuggingFaceFW/fineweb

# Spaces
hf spaces ls --search "3d"
hf spaces info enzostvs/deepsite
```


This organization mirrors the Python API (`list_models`, `model_info`, etc.), keeps the `hf <resource> <action>` pattern, and is extensible for future commands like `hf papers` or `hf collections`.

- [CLI] Add `hf models`/`hf datasets`/`hf spaces` commands by hanouticelina in 3669

🔧 Transformers CLI Installer

You can now install the `transformers` CLI alongside the `huggingface_hub` CLI using the standalone installer scripts.

```bash
# Install hf CLI only (default)
curl -LsSf https://hf.co/cli/install.sh | bash -s

# Install both hf and transformers CLIs
curl -LsSf https://hf.co/cli/install.sh | bash -s -- --with-transformers
```


```powershell
# Install hf CLI only (default)
powershell -c "irm https://hf.co/cli/install.ps1 | iex"

# Install both hf and transformers CLIs
powershell -c "irm https://hf.co/cli/install.ps1 | iex" -WithTransformers
```


Once installed, you can use the `transformers` CLI directly:

```bash
transformers serve
transformers chat openai/gpt-oss-120b
```


- Add transformers CLI installer by Wauplin in 3666

📊 Jobs Monitoring

New `hf jobs stats` command to monitor your running jobs in real-time, similar to `docker stats`. It displays a live table with CPU, memory, network, and GPU usage.

```console
> hf jobs stats
JOB ID                   CPU % NUM CPU MEM % MEM USAGE      NET I/O         GPU UTIL % GPU MEM % GPU MEM USAGE
------------------------ ----- ------- ----- -------------- --------------- ---------- --------- ---------------
6953ff6274100871415c13fd 0%    3.5     0.01% 1.3MB / 15.0GB 0.0bps / 0.0bps 0%         0.0%      0.0B / 22.8GB
```


A new `HfApi.fetch_job_metrics()` method is also available:

```python
>>> for metrics in fetch_job_metrics(job_id="6953ff6274100871415c13fd"):
...     print(metrics)
{
    "cpu_usage_pct": 0,
    "cpu_millicores": 3500,
    "memory_used_bytes": 1306624,
    "memory_total_bytes": 15032385536,
    "rx_bps": 0,
    "tx_bps": 0,
    "gpus": {
        "882fa930": {
            "utilization": 0,
            "memory_used_bytes": 0,
            "memory_total_bytes": 22836000000
        }
    },
    "replica": "57vr7"
}
```


- [Jobs] Monitor cpu, memory, network and gpu (if any) by lhoestq in 3655
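The percentages shown in the `hf jobs stats` table can be derived from the raw metrics payload. A sketch using the sample payload above (rounding and field names assumed from that sample):

```python
def summarize(metrics: dict) -> dict:
    """Derive the MEM % and GPU UTIL % columns from a raw metrics payload."""
    mem_pct = 100 * metrics["memory_used_bytes"] / metrics["memory_total_bytes"]
    gpu = next(iter(metrics["gpus"].values())) if metrics["gpus"] else None
    return {
        "mem_pct": round(mem_pct, 2),
        "gpu_util_pct": gpu["utilization"] if gpu else None,
    }


payload = {
    "memory_used_bytes": 1306624,
    "memory_total_bytes": 15032385536,
    "gpus": {"882fa930": {"utilization": 0,
                          "memory_used_bytes": 0,
                          "memory_total_bytes": 22836000000}},
}
# 1306624 / 15032385536 ≈ 0.0087%, which rounds to the 0.01% shown in the table.
assert summarize(payload) == {"mem_pct": 0.01, "gpu_util_pct": 0}
```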

💔 Breaking Change

The `direction` parameter in `list_models`, `list_datasets`, and `list_spaces` is now deprecated and ignored; sorting is always descending.

- [HfApi] deprecate `direction` in list repos methods by hanouticelina in 3630

🔧 Other QoL Improvements

- [Jobs][CLI] allow unknown options in jobs cli by lhoestq in 3614
- [UV Jobs] Pass local script as env variable by Wauplin in 3616
- [CLI] hf repo info + add --expand parameter by Wauplin in 3664
- log a message when HF_TOKEN is set in auth list by hanouticelina in 3608
- Support 'x | y' syntax in strict dataclasses by Wauplin in 3668
- feat: use http_backoff for LFS batch/verify/completion endpoints by The-Obstacle-Is-The-Way in 3622
- Support local folders safetensors metadata by vrdn-23 in 3623
- Add dataclass_transform decorator to dataclass_with_extra by charliermarsh in 3639
- Update papers model by Samoed in 3586

📖 Documentation

- doc fix by jzhang533 in 3597
- Fix a url in the docs by neo in 3606
- Add Job Timeout section to CLI docs by davanstrien in 3665

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

- Fix unbound local error when reading corrupted metadata files by Wauplin in 3610
- [CLI] Fix private should default to None, not False by Wauplin in 3618
- Fix `create_repo` returning wrong `repo_id` by hanouticelina in 3634
- Fix: Use self.endpoint in job-related APIs for custom endpoint support by PredictiveManish in 3653
- Fix hf-xet version mismatch by Tanishq1030 in 3662

🏗️ Internal

- Prepare for v1.3 by Wauplin in 3599
- [Internal] Fix quality by hanouticelina in 3607
- [CI] Fix warn on warning tests by Wauplin in 3617
- Try update bot settings by Wauplin in 3624
- trigger sentence-transformers CI for hfh prerelease by hanouticelina in 3626
- fix by hanouticelina in 3627
- remove unnecessary test by hanouticelina in 3631
- [Bot] Update inference types by HuggingFaceInfra in 3520
- Fix ty in CI by Wauplin in 3661
- Upgrade GitHub Actions for Node 24 compatibility by salmanmkc in 3637
- Upgrade GitHub Actions to latest versions by salmanmkc in 3638
- Remove fastai integration tests by hanouticelina in 3670

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* jzhang533
 * doc fix (3597)
* akshatvishu
 * [CLI] Add 'hf repo list' command  (3611)
* The-Obstacle-Is-The-Way
 * feat: use http_backoff for LFS batch/verify/completion endpoints (3622)
* salmanmkc
 * Upgrade GitHub Actions for Node 24 compatibility (3637)
 * Upgrade GitHub Actions to latest versions (3638)
* Samoed
 * Update papers model (3586)
* vrdn-23
 * Support local folders safetensors metadata (3623)

1.2.4

- Fix `create_repo` returning wrong repo_id by hanouticelina in 3634
- Fix: Use `self.endpoint` in job-related APIs for custom endpoint support by PredictiveManish in 3653
- Fix: `hf-xet` Requirements missmatch by 0Falli0 in 3239
- Add `dataclass_transform` decorator to `dataclass_with_extra` by charliermarsh in 3639


**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.2.3...v1.2.4

1.2.3

Patch release for 3618 by Wauplin.

> When creating a new repo, we should default to private=None instead of private=False. This is already the case when using the API but not when using the CLI. This is a bug likely introduced when switching to Typer. When defaulting to None, the repo visibility will default to False except if the organization has configured repos to be "private by default" (the check happens server-side, so it shouldn't be hardcoded client-side).

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.2.2...v1.2.3

1.2.2

- Fix unbound local error when reading corrupted metadata files by Wauplin in 3610
- Fix auth_list not showing HF_TOKEN message when no stored tokens exist by hanouticelina in 3608

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.2.1...v1.2.2

1.2.0

🚦 Smarter Rate Limit Handling

We've improved how the `huggingface_hub` library handles rate limits from the Hub. When you hit a rate limit, you'll now see clear, actionable error messages telling you exactly how long to wait and how many requests you have left.

```console
HfHubHTTPError: 429 Too Many Requests for url: https://huggingface.co/api/models/username/reponame.
Retry after 55 seconds (0/2500 requests remaining in current 300s window).
```


When a 429 error occurs, the SDK automatically parses the `RateLimit` header to extract the exact number of seconds until the rate limit resets, then waits precisely that duration before retrying. This applies to file downloads (i.e. Resolvers), uploads, and paginated Hub API calls (`list_models`, `list_datasets`, `list_spaces`, etc.).

More info about Hub rate limits in the docs 👉 [here](https://huggingface.co/docs/hub/rate-limits#hub-rate-limits).

> - Parse rate limit headers for better 429 error messages by hanouticelina in 3570
> - Use rate limit headers for smarter retry in http backoff by hanouticelina in 3577
> - Harmonize retry behavior for metadata fetch and `HfFileSystem` by hanouticelina in 3583
> - Add retry for preupload endpoint by hanouticelina in 3588
> - Use default retry values in pagination by hanouticelina in 3587
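The client-side behavior can be sketched in plain Python. Everything below is illustrative: `retry_with_rate_limit` and the `(status, retry_after, payload)` tuple are hypothetical stand-ins for the library's internals, which parse the real `RateLimit` header.

```python
import time

def retry_with_rate_limit(func, max_retries=3, default_backoff=1.0):
    """Call `func`, retrying on a (simulated) 429 response.

    Hypothetical sketch: `func` returns (status_code, retry_after_seconds, payload).
    """
    for attempt in range(max_retries + 1):
        status, retry_after, payload = func()
        if status != 429:
            return payload
        if attempt == max_retries:
            raise RuntimeError("rate limited: retries exhausted")
        # Prefer the server-provided reset delay; fall back to a fixed backoff.
        time.sleep(retry_after if retry_after is not None else default_backoff)

# Simulated endpoint: rate-limited once, then succeeds.
calls = {"n": 0}
def fake_endpoint():
    calls["n"] += 1
    return (429, 0.0, None) if calls["n"] == 1 else (200, None, "ok")

print(retry_with_rate_limit(fake_endpoint))  # ok
```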


✨ HF API

**Daily Papers endpoint**: You can now programmatically access Hugging Face's daily papers feed. You can filter by week, month, or submitter, and sort by publication date or trending.

```python
from huggingface_hub import list_daily_papers

for paper in list_daily_papers(date="2025-12-03"):
    print(paper.title)

DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models
ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration
MultiShotMaster: A Controllable Multi-Shot Video Generation Framework
Deep Research: A Systematic Survey
MG-Nav: Dual-Scale Visual Navigation via Sparse Spatial Memory
...
```


> Add daily papers endpoint by BastienGimbert in 3502
> Add more parameters to daily papers by Samoed in 3585

**Offline mode helper**: we recommend using `huggingface_hub.is_offline_mode()` to check whether offline mode is enabled instead of checking `HF_HUB_OFFLINE` directly.
> Add `offline_mode` helper by Wauplin in 3593 
> Rename utility to `is_offline_mode` by Wauplin in 3598


**Inference Endpoints:** You can now configure scaling metrics and thresholds when deploying endpoints.

> feat(endpoints): scaling metric and threshold by oOraph in 3525

**Exposed utilities**: `RepoFile` and `RepoFolder` are now available at the root level for easier imports.

> Expose `RepoFile` and `RepoFolder` at root level by Wauplin in 3564


⚡️ Inference Providers

[OVHcloud AI Endpoints](https://www.ovhcloud.com/en/public-cloud/ai-endpoints/catalog/) was added as an official Inference Provider in `v1.1.5`. OVHcloud provides European-hosted, GDPR-compliant model serving for your AI applications.

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```


> Add OVHcloud AI Endpoints as an Inference Provider by eliasto in 3541


We also added support for **automatic speech recognition (ASR)** with Replicate, so you can now transcribe audio files easily.

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="replicate",
    api_key=os.environ["HF_TOKEN"],
)

output = client.automatic_speech_recognition("sample1.flac", model="openai/whisper-large-v3")
```


> [Inference Providers] Add support for ASR with Replicate by hanouticelina in 3538

The `truncation_direction` parameter in `InferenceClient.feature_extraction` (and its async counterpart) now uses lowercase values (`"left"`/`"right"` instead of `"Left"`/`"Right"`) for consistency with other specs.
> [Inference] Use lowercase left/right truncation direction parameter by Wauplin in 3548


📁 HfFileSystem

**HfFileSystem**: A new top-level `hffs` alias makes working with the filesystem interface more convenient.

```python
>>> from huggingface_hub import hffs
>>> with hffs.open("datasets/fka/awesome-chatgpt-prompts/prompts.csv", "r") as f:
...     print(f.readline())
"act","prompt"
"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked..."
```


> [HfFileSystem] Add top level hffs by lhoestq in 3556
> [HfFileSystem] Add expand_info arg by lhoestq in 3575 

💔 Breaking Change
Paginated results when listing user access requests: `list_pending_access_requests`, `list_accepted_access_requests`, and `list_rejected_access_requests` now return an iterator instead of a list. This allows lazy loading of results for repositories with a large number of access requests. If you need a list, wrap the call with `list(...)`.

> Paginated results in `list_user_access` by Wauplin in 3535
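The migration is mechanical. Below is a pure-Python sketch of the new behavior; `list_pending_access_requests_stub` is a hypothetical stand-in that yields items lazily, page by page, like the updated API:

```python
def list_pending_access_requests_stub(pages):
    # Hypothetical stand-in for the paginated API: yields items lazily,
    # one page at a time, instead of materializing a full list.
    for page in pages:
        yield from page

it = list_pending_access_requests_stub([[{"user": "alice"}, {"user": "bob"}], [{"user": "carol"}]])
first = next(it)       # lazy: only the first page has been consumed so far
remaining = list(it)   # wrap the call with list(...) if you need the old behavior
```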


🔧 Other QoL Improvements

- Better default for `num_workers` by Qubitium in 3532
- Avoid redundant call to the Xet connection info URL by Wauplin in 3534
- Pass through additional arguments from `HfApi` download utils by schmrlng in 3531
- Add optional cache to `whoami` by Wauplin in 3568
- Enhance `repo_type_and_id_from_hf_id` by pulltheflower in 3507
- Warn on server warning 'X-HF-Warning' by Wauplin in 3589
- Just print server warning without hardcoded client-side addition by Wauplin in 3592
- Decrease number of files before falling back to `list_repo_tree` in `snapshot_download` by hanouticelina in 3565

📖 Documentation

- [Docs] Update CLI installation guide by hanouticelina in 3536
- Fix: correct `hf login` example to `hf auth login` by alisheryeginbay in 3590

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

- [Inference] Fix zero shot classification output parsing by hanouticelina in 3561
- Fix `FileNotFoundError` in CLI update check by hanouticelina in 3574
- Fix `HfHubHTTPError` reduce error by adding factory function by owenowenisme in 3579
- Make 'name' optional in catalog deploy by Wauplin in 3529
- Do not use rich in tiny-agents CLI by Wauplin in 3573
- use `constants.HF_HUB_ETAG_TIMEOUT` as timeout for `get_hf_file_metadata` by krrome in 3595

🏗️ Internal

- Add `huggingface_hub` as dependency for hf by Wauplin in 3527
- Prepare for 1.2 release by hanouticelina in 3528
- [Internal] Fix CI by hanouticelina in 3544
- Fix test_list_spaces_linked in CI by Wauplin in 3549
- Fix minor things in CI by Wauplin in 3558
- [Internal] Fix quality by hanouticelina in 3572
- Fix quality by hanouticelina in 3584


Significant community contributions

The following contributors have made significant changes to the library over the last release:

* schmrlng
 * Pass through additional arguments from `HfApi` download utils (3531)
* eliasto
 * Add OVHcloud AI Endpoints as an Inference Provider (3541)
* Boulaouaney
 * Add uv support to installation scripts for faster package installation (3486)
* pulltheflower
 * Enhance repo_type_and_id_from_hf_id of hf_api (3507)
* owenowenisme
 * Fix HfHubHTTPError reduce error by adding factory function (3579)
* BastienGimbert
 * Add daily papers endpoint (3502)
* Samoed
 * Add more parameters to daily papers (3585)

1.1.7

[HfFileSystem] Add top level hffs by lhoestq in 3556.

Example:

```python
>>> from huggingface_hub import hffs
>>> with hffs.open("datasets/fka/awesome-chatgpt-prompts/prompts.csv", "r") as f:
...     print(f.readline())
...     print(f.readline())
"act","prompt"
"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked..."
```


**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.1.6...v1.1.7

1.1.6

This release includes multiple bug fixes:

- decrease number of files before falling back to `list_repo_tree` in `snapshot_download` 3565 by hanouticelina 
- Fix `HfHubHTTPError` reduce error by adding factory function 3579  by owenowenisme
- Fix `FileNotFoundError` in CLI update check 3574 by hanouticelina 
- Do not use rich in `tiny-agents` CLI 3573 by Wauplin 

---

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.1.5...v1.1.6

1.1.5

⚡️ New Inference Provider: OVHcloud AI Endpoints

[OVHcloud AI Endpoints](https://www.ovhcloud.com/en/public-cloud/ai-endpoints/catalog/) is now an official [Inference Provider](https://huggingface.co/docs/inference-providers/en/index) on Hugging Face! 🎉 
OVHcloud delivers fast, production ready inference on secure, sovereign, fully 🇪🇺 European infrastructure - combining advanced features with competitive pricing.

```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```

More snippets examples in the provider documentation 👉 [here](https://huggingface.co/docs/inference-providers/en/providers/ovhcloud).

- Add OVHcloud AI Endpoints as an Inference Provider in 3541 by eliasto

QoL Improvements 

Installing the CLI is now much faster, thanks to Boulaouaney for adding support for `uv`, bringing faster package installation.

- Add uv support to installation scripts for faster package installation in 3486 by Boulaouaney

Bug Fixes

This release also includes the following bug fixes:
- [Collections] Add collections to collections by slug id in 3551 by hanouticelina 
- [CLI] Respect `HF_DEBUG` environment variable in 3562 by hanouticelina 
- [Inference] fix zero shot classification output parsing in 3561 by hanouticelina

1.1.4

- Paginated results in `list_user_access` by Wauplin in https://github.com/huggingface/huggingface_hub/pull/3535

⚠️ This patch release is a breaking change, but it is necessary to reflect an API update made server-side.

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.1.3...v1.1.4

1.1.3

- Make 'name' optional in catalog deploy by Wauplin  in  https://github.com/huggingface/huggingface_hub/pull/3529
- Pass through additional arguments from HfApi download utils by schmrlng in https://github.com/huggingface/huggingface_hub/pull/3531
- Avoid redundant call to the Xet connection info URL by Wauplin in https://github.com/huggingface/huggingface_hub/pull/3534
  - This fixes HTTP 429 rate limit issues that occurred when downloading a very large dataset of small files

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.1.0...v1.1.3

1.1.0

🚀 Optimized Download Experience

⚡ This release significantly improves the file download experience by making it faster and cleaning up the terminal output.

`snapshot_download` is now **always multi-threaded**, leading to significant performance gains. We removed a previous limitation, as Xet's internal resource management ensures we can parallelize downloads safely without resource contention. A sample benchmark showed this made the download much faster!

Additionally, the output for `snapshot_download` and `hf download` CLI is now much less verbose. Per file logs are hidden by default, and all individual progress bars are combined into a single progress bar, resulting in a much cleaner output.


![download_2](https://github.com/user-attachments/assets/1546cbee-64c8-48ff-8304-f48e9bc91446)



* Multi-threaded snapshot download  by Wauplin in 3522
* Compact output in `snapshot_download` and `hf download`  by Wauplin in 3523
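The core idea behind the multi-threaded download is simple fan-out over a thread pool. A minimal sketch (illustrative only: `download_snapshot` and `fetch_one` are hypothetical stand-ins, not the library's actual internals):

```python
from concurrent.futures import ThreadPoolExecutor

def download_snapshot(filenames, fetch_one, max_workers=8):
    # Sketch of a multi-threaded snapshot download: each file is fetched in a
    # worker thread; `fetch_one` stands in for the real per-file download
    # routine. `pool.map` preserves the input order of results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_one, filenames))

files = ["model.safetensors", "config.json", "tokenizer.json"]
results = download_snapshot(files, fetch_one=lambda name: f"downloaded {name}")
```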

Inference Providers

🆕 [WaveSpeedAI](https://wavespeed.ai) is now an official [Inference Provider](https://huggingface.co/docs/inference-providers/en/index) on Hugging Face! 🎉 [WaveSpeedAI](https://wavespeed.ai/) provides fast, scalable, and cost-effective model serving for creative AI applications, supporting `text-to-image`, `image-to-image`, `text-to-video`, and `image-to-video` tasks. 🎨 


```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="wavespeed",
    api_key=os.environ["HF_TOKEN"],
)

video = client.text_to_video(
    "A cat riding a bike",
    model="Wan-AI/Wan2.2-TI2V-5B",
)
```

More snippets examples in the provider documentation 👉 [here](https://huggingface.co/docs/inference-providers/en/providers/wavespeed). 


We also added support for `image-segmentation` task for [fal](https://huggingface.co/docs/inference-providers/en/providers/fal-ai), enabling state-of-the-art background removal with [RMBG v2.0](https://huggingface.co/briaai/RMBG-2.0).
```python
import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fal-ai",
    api_key=os.environ["HF_TOKEN"],
)

output = client.image_segmentation("cats.jpg", model="briaai/RMBG-2.0")
```

![MixCollage-05-Nov-2025-11-49-AM-7835](https://github.com/user-attachments/assets/f5dc88a8-f242-4e13-9d76-1d94a718fa18)


* [inference provider] Add wavespeed.ai as an inference provider  by arabot777 in 3474
* [Inference Providers] implement `image-segmentation` for fal  by hanouticelina in 3521

🦾 CLI continues to get even better!

Following the complete revamp of the Hugging Face CLI in `v1.0`, this release builds on that foundation by adding powerful new features and improving accessibility.

New `hf` PyPI Package

To make the CLI even easier to access, we've published a new, minimal PyPI package: `hf`. This package installs the `hf` CLI tool and is perfect for quick, isolated execution with modern tools like [uvx](https://docs.astral.sh/uv/guides/tools/).

```bash
# Run the CLI without installing it
> uvx hf auth whoami
```


⚠️ Note: This package is for the CLI only. Attempting to `import hf` in a Python script will correctly raise an `ImportError`.

A big thank you to thorwhalen for generously transferring the `hf` package name to us on PyPI. This will make the CLI much more accessible for all Hugging Face users. 🤗 

* Upload `hf` CLI to PyPI  by Wauplin in 3511

Manage Inference Endpoints

A new command group, `hf endpoints`, has been added to deploy and manage your [Inference Endpoints](https://endpoints.huggingface.co) directly from the terminal.

This provides "one-liners" for deploying, deleting, updating, and monitoring endpoints. The CLI offers two clear paths for deployment: `hf endpoints deploy` for standard Hub models and `hf endpoints catalog deploy` for optimized Model Catalog configurations.

```console
> hf endpoints --help
Usage: hf endpoints [OPTIONS] COMMAND [ARGS]...

Manage Hugging Face Inference Endpoints.

Options:
--help  Show this message and exit.

Commands:
catalog        Interact with the Inference Endpoints catalog.
delete         Delete an Inference Endpoint permanently.
deploy         Deploy an Inference Endpoint from a Hub repository.
describe       Get information about an existing endpoint.
ls             Lists all Inference Endpoints for the given namespace.
pause          Pause an Inference Endpoint.
resume         Resume an Inference Endpoint.
scale-to-zero  Scale an Inference Endpoint to zero.
update         Update an existing endpoint.
```

* [CLI] Add Inference Endpoints Commands  by hanouticelina in 3428

Verify Cache Integrity

A new command, `hf cache verify`, has been added to check your cached files against their checksums on the Hub. This is a great tool to ensure your local cache is not corrupted and is in sync with the remote repository.

```console
> hf cache verify --help
Usage: hf cache verify [OPTIONS] REPO_ID

Verify checksums for a single repo revision from cache or a local directory.

Examples:
- Verify main revision in cache: `hf cache verify gpt2`
- Verify specific revision: `hf cache verify gpt2 --revision refs/pr/1`
- Verify dataset: `hf cache verify karpathy/fineweb-edu-100b-shuffle --repo-type dataset`
- Verify local dir: `hf cache verify deepseek-ai/DeepSeek-OCR --local-dir /path/to/repo`

Arguments:
REPO_ID  The ID of the repo (e.g. `username/repo-name`).  [required]

Options:
--repo-type [model|dataset|space]
                               The type of repository (model, dataset, or
                               space).  [default: model]
--revision TEXT                 Git revision id which can be a branch name,
                               a tag, or a commit hash.
--cache-dir TEXT                Cache directory to use when verifying files
                               from cache (defaults to Hugging Face cache).
--local-dir TEXT                If set, verify files under this directory
                               instead of the cache.
--fail-on-missing-files         Fail if some files exist on the remote but
                               are missing locally.
--fail-on-extra-files           Fail if some files exist locally but are not
                               present on the remote revision.
--token TEXT                    A User Access Token generated from
                               https://huggingface.co/settings/tokens.
--help                          Show this message and exit.
```


* [CLI] Add `hf cache verify`  by hanouticelina in 3461
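The verification step boils down to hashing each local file and comparing the digest to what the Hub reports. A self-contained sketch of that one step, assuming a SHA-256 checksum is available for the remote file (the real command obtains digests from the Hub and handles many files at once):

```python
import hashlib
import os
import tempfile

def verify_file(path, expected_sha256):
    # Illustrative sketch (not the real implementation): hash the local file in
    # chunks and compare the hex digest against the expected checksum.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Demo with a temporary file standing in for a cached download.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name
ok = verify_file(path, hashlib.sha256(b"hello").hexdigest())
os.remove(path)
print(ok)  # True
```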


Cache Sorting and Limiting

Managing your local cache is now easier. The `hf cache ls` command has been enhanced with two new options:

- `--sort`: Sort your cache by `accessed`, `modified`, `name`, or `size`. You can also specify order (e.g., `modified:asc` to find the oldest files).
- `--limit`: Get just the top N results after sorting (e.g., `--limit 10`).

```console
# List top 10 most recently accessed repos
> hf cache ls --sort accessed --limit 10

# Find the 5 largest repos you haven't used in over a year
> hf cache ls --filter "accessed>1y" --sort size --limit 5
```


* Add sort and limit parameters in hf cache ls  by Wauplin in 3510
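For intuition, the `--sort key[:asc|:desc]` plus `--limit` behavior can be sketched in a few lines of Python. `parse_sort` and the sample `repos` data are hypothetical illustrations, not the CLI's actual code:

```python
def parse_sort(spec, allowed=("accessed", "modified", "name", "size")):
    # Hypothetical helper mirroring the `--sort key[:asc|:desc]` syntax;
    # order defaults to descending, matching the CLI examples above.
    key, _, order = spec.partition(":")
    if key not in allowed:
        raise ValueError(f"unknown sort key: {key}")
    return key, (order or "desc")

repos = [{"name": "a", "size": 30}, {"name": "b", "size": 10}, {"name": "c", "size": 20}]
key, order = parse_sort("size:asc")
# Apply sort, then limit to the top 2 results.
top2 = sorted(repos, key=lambda r: r[key], reverse=(order == "desc"))[:2]
print([r["name"] for r in top2])  # ['b', 'c']
```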


Finally, we've patched the CLI installer script to fix a bug for `zsh` users. The installer now works correctly across all common shells.
* Use hf installer with bash  by Wauplin in 3498
* make installer work for zsh  by hanouticelina in 3513

🔧 Other
We've fixed a bug in `HfFileSystem` where the instance cache would break when using multiprocessing with the "fork" start method.

* [HfFileSystem] improve cache for multiprocessing fork and multithreading  by lhoestq in 3500


🌍 Documentation
Thanks to BastienGimbert for translating the README to French 🇫🇷 🤗 
* i18n: add French README translation  by BastienGimbert in 3490

Thanks also to didier-durand for fixing multiple typos throughout the library! 🤗 

* [Doc]: fix various typos in different files  by didier-durand in 3499
* [Doc]: fix various typos in different files  by didier-durand in 3509
* [Doc]: fix various typos in different files  by didier-durand in 3514
* [Doc]: fix various typos in different files  by didier-durand in 3517
* [Doc]: fix various typos in different files  by didier-durand in 3497



🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

* Close HTTP sessions on fork  by Wauplin in 3508
* Fix some outdated docs  by Wauplin in 3495
🏗️ Internal

* Remove aiohttp dependency  by Wauplin in 3488
* Prepare for 1.1.0  by Wauplin in 3489
* Fix type annotations in inference codegen  by Wauplin in 3496
* Add CI + official support for Python 3.14  by Wauplin in 3483
* [Internal] Fix quality issue generated from `update-inference-types` workflow  by hanouticelina in 3516


Significant community contributions

The following contributors have made significant changes to the library over the last release:

* arabot777
 * [inference provider] Add wavespeed.ai as an inference provider (3474)
* BastienGimbert
 * i18n: add French README translation (3490)
* didier-durand
 * [Doc]: fix various typos in different files (3497)
 * [Doc]: fix various typos in different files (3499)
 * [Doc]: fix various typos in different files (3509)
 * [Doc]: fix various typos in different files (3514)
 * [Doc]: fix various typos in different files (3517)

1.0.1

In the `huggingface_hub` v1.0 release, we removed the dependency on `aiohttp` and replaced it with `httpx`, but forgot to remove it from the `huggingface_hub[inference]` extra dependencies in `setup.py`. This patch release removes it, and the `inference` extra has been removed as well.

The unused internal method `_import_aiohttp` has also been removed.

- Remove aiohttp dependency by Wauplin in  3488

**Full Changelog**: https://github.com/huggingface/huggingface_hub/compare/v1.0.0...v1.0.1

1.0

```console
You are about to delete tag v1.0 on model Wauplin/my-cool-model
Proceed? [Y/n] y
Tag v1.0 deleted on Wauplin/my-cool-model
```


For more details, check out the [CLI guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli#huggingface-cli-tag).

* CLI Tag Functionality   by bilgehanertan in 2172

🧩 ModelHubMixin

The `ModelHubMixin` class got a set of nice improvements to generate model cards and handle custom data types in the `config.json` file. More info in the [integration guide](https://huggingface.co/docs/huggingface_hub/main/en/guides/integrations#advanced-usage).

* `ModelHubMixin`: more metadata + arbitrary config types + proper guide  by Wauplin in 2230
* Fix ModelHubMixin when class is a dataclass  by Wauplin in 2159
* Do not document private attributes of ModelHubMixin  by Wauplin in 2216
* Add support for pipeline_tag in ModelHubMixin  by Wauplin in 2228

⚙️ Other

In a shared environment, it is now possible to set a custom path via the `HF_TOKEN_PATH` environment variable so that each user of the cluster has their own access token.

* Support `HF_TOKEN_PATH` as environment variable  by Wauplin in 2185
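The resolution logic can be sketched as a simple environment lookup with a fallback. This is an illustrative sketch only (`resolve_token_path` is a hypothetical helper; the library's actual resolution code differs), assuming the default token lives under `~/.cache/huggingface/token`:

```python
import os

def resolve_token_path(env):
    # Sketch: honor HF_TOKEN_PATH when set, otherwise fall back to the
    # assumed default token location under the user's cache directory.
    default = os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "token")
    return env.get("HF_TOKEN_PATH", default)

# Each cluster user can point HF_TOKEN_PATH at their own token file.
shared_env = {"HF_TOKEN_PATH": "/shared/cluster/alice/token"}
print(resolve_token_path(shared_env))  # /shared/cluster/alice/token
```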

Thanks to Y4suyuki and lappemic, most custom errors defined in `huggingface_hub` are now aggregated in the same module, making them easy to import with `from huggingface_hub.errors import ...`.

* Define errors in errors.py  by Y4suyuki in 2170
* Define errors in errors file  by lappemic in 2202

Fixed `HFSummaryWriter` (class to seamlessly log tensorboard events to the Hub) to work with either `tensorboardX` or `torch.utils` implementation, depending on the user setup.

* Import SummaryWriter from either tensorboardX or torch.utils  by Wauplin in 2205

Listing files with `HfFileSystem` is now drastically faster, thanks to awgr. The values returned from the cache are no longer deep-copied, which was unfortunately the most time-consuming part of the process. If users want to modify values returned by `HfFileSystem`, they need to copy them beforehand. This is expected to be a very limited drawback.

* fix: performance of _ls_tree  by awgr in 2103
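In practice the copy-before-mutate pattern looks like this (a sketch with a plain dict standing in for a cached `HfFileSystem` entry):

```python
import copy

# Values from the cache are no longer deep-copied, so copy first if you need
# to mutate a returned entry. `cached_entry` is a stand-in for a cached value.
cached_entry = {"name": "data.csv", "size": 10}

entry = copy.deepcopy(cached_entry)  # your private copy
entry["size"] = 99                   # safe to mutate

print(cached_entry["size"])  # 10 (the cached value is left untouched)
```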

Progress bars in `huggingface_hub` got some flexibility!
It is now possible to provide a name to a tqdm bar (similar to `logging.getLogger`) and to enable/disable only some progress bars. More details in [this guide](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/utilities#configure-progress-bars).

```py
>>> from huggingface_hub.utils import tqdm, disable_progress_bars
>>> disable_progress_bars("peft.foo")

# No progress bars for `peft.foo.bar`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass

# But for `peft` yes
>>> for _ in tqdm(range(5), name="peft"):
...     pass
100%|█████████████████| 5/5 [00:00<00:00, 117817.53it/s]
```


* Implement hierarchical progress bar control in huggingface_hub  by lappemic in 2217
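The hierarchical matching works like `logging.getLogger` namespaces: disabling a name affects that name and everything nested under it, but not its parents. A self-contained sketch of the matching rule (illustrative only; `is_progress_disabled` is not the library's API):

```python
def is_progress_disabled(name, disabled_prefixes):
    # Sketch of hierarchical name matching: disabling "peft.foo" affects
    # "peft.foo" and "peft.foo.bar", but not the parent "peft".
    return any(name == p or name.startswith(p + ".") for p in disabled_prefixes)

disabled = {"peft.foo"}
print(is_progress_disabled("peft.foo.bar", disabled))  # True
print(is_progress_disabled("peft", disabled))          # False
```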

💔 Breaking changes

`--local-dir-use-symlink` and `--resume-download`

As part of the download process revamp, some breaking changes have been introduced. However we believe that the benefits outweigh the change cost. Breaking changes include:
- a `.cache/huggingface/` folder is now present at the root of the local dir. It only contains file locks, metadata and partially downloaded files. If you need to, you can safely delete this folder without corrupting the data inside the root folder. However, you should expect a longer recovery time if you try to re-run your download command.
- `--local-dir-use-symlink` is no longer used and will be ignored. It is no longer possible to symlink your local dir to the cache directory. Thanks to the `.cache/huggingface/` folder, it shouldn't be needed anyway.
- `--resume-download` has been deprecated and will be ignored. Resuming failed downloads is now activated by default all the time. If you need to force a new download, use `--force-download`.

Inference Types

As part of 2237 (Grammar and Tools support), we've updated the return values of `InferenceClient.chat_completion` and `InferenceClient.text_generation` to exactly match TGI's output. The attributes of the returned objects did not change, but the class definitions themselves did. Expect errors if you previously had `from huggingface_hub import TextGenerationOutput` in your code. This is however not the common usage, since those objects are instantiated by `huggingface_hub` directly.

Expected breaking changes

Some other breaking changes were expected (and announced since 0.19.x):
- `list_files_info` is definitively removed in favor of `get_paths_info` and `list_repo_tree`
- `WebhookServer.run` is definitively removed in favor of `WebhookServer.launch`
- `api_endpoint` in ModelHubMixin `push_to_hub`'s method is definitively removed in favor of the `HF_ENDPOINT` environment variable

Check 2156 for more details.

Small fixes and maintenance

⚙️ CI optimization

⚙️ fixes
* Fix HF_ENDPOINT not handled correctly  by Wauplin in 2155
* Fix proxy if dynamic endpoint by Wauplin (direct commit on main)
* Update the note message when logging in to make it easier to understand and clearer  by lh0x00 in 2163
* Fix URL when uploading to proxy  by Wauplin in 2167
* Fix SafeTensorsInfo initialization  by Wauplin in 2190
* Doc cli download timeout  by zioalex in 2198
* Fix Typos in CONTRIBUTION.md and Formatting in README.md  by lappemic in 2201
* change default model card by Wauplin (direct commit on main)
* Add returns documentation for save_pretrained  by alexander-soare in 2226
* Update cli.md  by QuinnPiers in 2242
* add warning tip that list_deployed_models only searches over cache  by MoritzLaurer in 2241
* Respect default timeouts in `hf_file_system`  by Wauplin in 2253
* Update harmonized token param desc and type def  by lappemic in 2252
* Better document download attribute  by Wauplin in 2250
* Correctly check inference endpoint is ready  by Wauplin in 2229
* Add support for `updatedRefs` in WebhookPayload  by Wauplin in 2169

⚙️ internal
* prepare for 0.23  by Wauplin in 2156
* lint by Wauplin (direct commit on main)
* quick fix by Wauplin (direct commit on main)
* Fix CI (inference tests, dataset viewer user, mypy)  by Wauplin in 2208
* link by Wauplin (direct commit on main)
* Fix circular imports in eager mode?  by Wauplin in 2211
* Drop generic from InferenceAPI framework list  by Wauplin in 2240
* Remove test sort by acsending likes  by Wauplin in 2243
* Delete legacy tests in `TestHfHubDownloadRelativePaths` + implicit delete folder is ok  by Wauplin in 2259
* small doc clarification  by julien-c  [2261](https://github.com/huggingface/huggingface_hub/pull/2261)

Significant community contributions

The following contributors have made significant changes to the library over the last release:

* lappemic
 * Fix Typos in CONTRIBUTION.md and Formatting in README.md ([2201](https://github.com/huggingface/huggingface_hub/pull/2201))
 * Define errors in errors file ([2202](https://github.com/huggingface/huggingface_hub/pull/2202))
 * [wip] Implement hierarchical progress bar control in huggingface_hub ([2217](https://github.com/huggingface/huggingface_hub/pull/2217))
 * Update harmonized token param desc and type def ([2252](https://github.com/huggingface/huggingface_hub/pull/2252))
* bilgehanertan
 * User API endpoints ([2147](https://github.com/huggingface/huggingface_hub/pull/2147))
 * CLI Tag Functionality  ([2172](https://github.com/huggingface/huggingface_hub/pull/2172))
* cjfghk5697
 * 🌐 [i18n-KO] Translated `guides/repository.md` to Korean  ([2124](https://github.com/huggingface/huggingface_hub/pull/2124))
 * 🌐 [i18n-KO] Translated `package_reference/inference_client.md` to Korean ([2178](https://github.com/huggingface/huggingface_hub/pull/2178))
 * 🌐 [i18n-KO] Translated `package_reference/utilities.md` to Korean ([2196](https://github.com/huggingface/huggingface_hub/pull/2196))
* SeungAhSon
 * 🌐 [i18n-KO] Translated `guides/model_cards.md` to Korean ([2128](https://github.com/huggingface/huggingface_hub/pull/2128))
