Commit aef6bd8

Merge branch 'main' into verify-checksum-cli
2 parents a259ad1 + ff9b6e9 commit aef6bd8

25 files changed (+76, -70 lines)

.github/workflows/python-tests.yml

Lines changed: 13 additions & 3 deletions

@@ -21,10 +21,10 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: ["3.9", "3.13"]
+        python-version: ["3.9", "3.14"]
         test_name: ["Everything else", "Inference only", "Xet only"]
         include:
-          - python-version: "3.13" # LFS not ran on 3.9
+          - python-version: "3.14" # LFS not ran on 3.9
            test_name: "lfs"
          - python-version: "3.9"
            test_name: "fastai"
@@ -34,6 +34,8 @@ jobs:
           test_name: "Python 3.9, torch_1.11"
         - python-version: "3.12" # test torch latest on python 3.12 only.
           test_name: "torch_latest"
+        - python-version: "3.13" # gradio not supported on 3.14 -> test it on 3.13
+          test_name: "gradio"
     steps:
       - uses: actions/checkout@v2
       - name: Set up Python ${{ matrix.python-version }}
@@ -69,6 +71,10 @@ jobs:
             uv pip install "huggingface_hub[fastai] @ ."
             ;;

+          gradio)
+            uv pip install "huggingface_hub[gradio] @ ."
+            ;;
+
           torch_latest)
             uv pip install "huggingface_hub[torch] @ ."
             uv pip install --upgrade torch
@@ -117,6 +123,10 @@ jobs:
           eval "$PYTEST ../tests/test_fastai*"
           ;;

+        gradio)
+          eval "$PYTEST ../tests/test_webhooks_server.py"
+          ;;
+
         "Python 3.9, torch_1.11" | torch_latest)
           eval "$PYTEST ../tests/test_hub_mixin*"
           eval "$PYTEST ../tests/test_serialization.py"
@@ -148,7 +158,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        python-version: ["3.9", "3.11"]
+        python-version: ["3.9", "3.14"]
         test_name: ["Everything else", "Xet only"]

     steps:
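The matrix change above can be illustrated with a short sketch of how GitHub Actions expands a `matrix` plus `include` entries into concrete jobs. This is a simplified model (real `include` semantics can also extend matching combinations), using only a subset of the entries from the workflow; `expand_matrix` is a hypothetical helper, not part of any library:

```python
from itertools import product

def expand_matrix(matrix, include):
    # Simplified model of GitHub Actions matrix expansion: take the
    # cross-product of the matrix axes, then append `include` entries
    # that introduce combinations not already present.
    jobs = [dict(zip(matrix, combo)) for combo in product(*matrix.values())]
    jobs.extend(entry for entry in include if entry not in jobs)
    return jobs

matrix = {
    "python-version": ["3.9", "3.14"],
    "test_name": ["Everything else", "Inference only", "Xet only"],
}
include = [
    {"python-version": "3.14", "test_name": "lfs"},
    {"python-version": "3.13", "test_name": "gradio"},
]
jobs = expand_matrix(matrix, include)
print(len(jobs))  # 2 versions x 3 test names + 2 include entries = 8 jobs
```

Under this model, the new gradio entry adds exactly one extra job pinned to Python 3.13.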

docs/source/en/guides/cli.md

Lines changed: 2 additions & 2 deletions

@@ -4,7 +4,7 @@ rendered properly in your Markdown viewer.
 
 # Command Line Interface (CLI)
 
-The `huggingface_hub` Python package comes with a built-in CLI called `hf`. This tool allows you to interact with the Hugging Face Hub directly from a terminal. For example, you can login to your account, create a repository, upload and download files, etc. It also comes with handy features to configure your machine or manage your cache. In this guide, we will have a look at the main features of the CLI and how to use them.
+The `huggingface_hub` Python package comes with a built-in CLI called `hf`. This tool allows you to interact with the Hugging Face Hub directly from a terminal. For example, you can log in to your account, create a repository, upload and download files, etc. It also comes with handy features to configure your machine or manage your cache. In this guide, we will have a look at the main features of the CLI and how to use them.
 
 > [!TIP]
 > This guide covers the most important features of the `hf` CLI.
@@ -174,7 +174,7 @@ hf download --help
 
 ### Download a single file
 
-To download a single file from a repo, simply provide the repo_id and filename as follow:
+To download a single file from a repo, simply provide the repo_id and filename as follows:
 
 ```bash
 >>> hf download gpt2 config.json

docs/source/en/guides/download.md

Lines changed: 1 addition & 1 deletion

@@ -68,7 +68,7 @@ Note that it is used internally by [`hf_hub_download`].
 ## Download an entire repository
 
 [`snapshot_download`] downloads an entire repository at a given revision. It uses internally [`hf_hub_download`] which
-means all downloaded files are also cached on your local disk. Downloads are made concurrently to speed-up the process.
+means all downloaded files are also cached on your local disk. Downloads are made concurrently to speed up the process.
 
 To download a whole repository, just pass the `repo_id` and `repo_type`:
 
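The concurrency the guide mentions can be sketched with a thread pool; this is purely illustrative of the approach, not the library's internals, and `fetch_one` is a hypothetical stand-in for a per-file download call:

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(filenames, fetch_one, max_workers=4):
    # Fetch several files concurrently; pool.map preserves the input
    # order in the returned list even though downloads overlap in time.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_one, filenames))

# Stub fetcher standing in for a real per-file download.
files = download_all(["config.json", "model.safetensors"],
                     lambda name: f"downloaded:{name}")
print(files)  # ['downloaded:config.json', 'downloaded:model.safetensors']
```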

docs/source/en/guides/jobs.md

Lines changed: 1 addition & 1 deletion

@@ -172,7 +172,7 @@ Running this will show the following output!
 This code ran with the following GPU: NVIDIA A10G
 ```
 
-Use this to run a fine tuning script like [trl/scripts/sft.py](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py) with UV:
+Use this to run a fine-tuning script like [trl/scripts/sft.py](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py) with UV:
 
 ```python
 >>> from huggingface_hub import run_uv_job

docs/source/en/guides/manage-spaces.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ In this guide, we will see how to manage your Space runtime
 
 ## A simple example: configure secrets and hardware.
 
-Here is an end-to-end example to create and setup a Space on the Hub.
+Here is an end-to-end example to create and set up a Space on the Hub.
 
 **1. Create a Space on the Hub.**

docs/source/en/package_reference/environment_variables.md

Lines changed: 2 additions & 2 deletions

@@ -125,7 +125,7 @@ If `HF_HUB_OFFLINE=1` is set as environment variable and you call any method of
 
 ### HF_HUB_DISABLE_IMPLICIT_TOKEN
 
-Authentication is not mandatory for every requests to the Hub. For instance, requesting
+Authentication is not mandatory for every request to the Hub. For instance, requesting
 details about `"gpt2"` model does not require to be authenticated. However, if a user is
 [logged in](../package_reference/login), the default behavior will be to always send the token
 in order to ease user experience (never get a HTTP 401 Unauthorized) when accessing private or gated repositories. For privacy, you can
@@ -138,7 +138,7 @@ would need to explicitly pass `token=True` argument in your script.
 
 ### HF_HUB_DISABLE_PROGRESS_BARS
 
-For time consuming tasks, `huggingface_hub` displays a progress bar by default (using tqdm).
+For time-consuming tasks, `huggingface_hub` displays a progress bar by default (using tqdm).
 You can disable all the progress bars at once by setting `HF_HUB_DISABLE_PROGRESS_BARS=1`.
 
 ### HF_HUB_DISABLE_SYMLINKS_WARNING
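As a rough sketch of how a boolean environment variable such as `HF_HUB_DISABLE_PROGRESS_BARS` can be read: the helper below and its accepted spellings are assumptions for illustration, not the library's actual parsing logic, which may accept a different set of values.

```python
import os

def progress_bars_disabled(env=None):
    # Hypothetical helper: treat common truthy spellings as "disabled".
    # The real library's accepted values may differ.
    env = os.environ if env is None else env
    value = env.get("HF_HUB_DISABLE_PROGRESS_BARS", "")
    return value.strip().lower() in {"1", "true", "yes", "on"}

print(progress_bars_disabled({"HF_HUB_DISABLE_PROGRESS_BARS": "1"}))  # True
print(progress_bars_disabled({}))  # False
```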

setup.py

Lines changed: 5 additions & 3 deletions

@@ -65,7 +65,6 @@ def get_version() -> str:
     "urllib3<2.0",  # VCR.py broken with urllib3 2.0 (see https://urllib3.readthedocs.io/en/stable/v2-migration-guide.html)
     "soundfile",
     "Pillow",
-    "requests",  # for gradio
     "numpy",  # for embeddings
     "fastapi",  # To build the documentation
 ]
@@ -74,8 +73,10 @@ def get_version() -> str:
 if sys.version_info >= (3, 10):
     # We need gradio to test webhooks server
     # But gradio 5.0+ only supports python 3.10+ so we don't want to test earlier versions
-    extras["testing"].append("gradio>=5.0.0")
-    extras["testing"].append("requests")  # see https://github.com/gradio-app/gradio/pull/11830
+    extras["gradio"] = [
+        "gradio>=5.0.0",
+        "requests",  # see https://github.com/gradio-app/gradio/pull/11830
+    ]
 
 # Typing extra dependencies list is duplicated in `.pre-commit-config.yaml`
 # Please make sure to update the list there when adding a new typing dependency.
@@ -135,6 +136,7 @@ def get_version() -> str:
     "Programming Language :: Python :: 3.11",
     "Programming Language :: Python :: 3.12",
     "Programming Language :: Python :: 3.13",
+    "Programming Language :: Python :: 3.14",
     "Topic :: Scientific/Engineering :: Artificial Intelligence",
 ],
 include_package_data=True,
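The version-gated extra introduced above follows a common setuptools pattern; here is a minimal self-contained sketch with an illustrative subset of dependencies (`build_extras` is a hypothetical name, not a function in setup.py):

```python
import sys

def build_extras(py_version=None):
    # Illustrative sketch: the "gradio" extra only exists on Python 3.10+,
    # since gradio 5.0+ dropped support for older interpreters.
    py_version = sys.version_info if py_version is None else py_version
    extras = {"testing": ["pytest"]}  # illustrative subset only
    if py_version >= (3, 10):
        extras["gradio"] = [
            "gradio>=5.0.0",
            "requests",  # see https://github.com/gradio-app/gradio/pull/11830
        ]
    return extras

print(sorted(build_extras((3, 9))))   # ['testing']
print(sorted(build_extras((3, 14))))  # ['gradio', 'testing']
```

Moving the dependencies from `extras["testing"]` into a dedicated `extras["gradio"]` is what lets the CI install them selectively with `huggingface_hub[gradio]`.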

src/huggingface_hub/_inference_endpoints.py

Lines changed: 1 addition & 1 deletion

@@ -368,7 +368,7 @@ def scale_to_zero(self) -> "InferenceEndpoint":
         """Scale Inference Endpoint to zero.
 
         An Inference Endpoint scaled to zero will not be charged. It will be resumed on the next request to it, with a
-        cold start delay. This is different than pausing the Inference Endpoint with [`InferenceEndpoint.pause`], which
+        cold start delay. This is different from pausing the Inference Endpoint with [`InferenceEndpoint.pause`], which
         would require a manual resume with [`InferenceEndpoint.resume`].
 
         This is an alias for [`HfApi.scale_to_zero_inference_endpoint`]. The current object is mutated in place with the
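The scale-to-zero vs. pause distinction in this docstring can be modeled as a small state machine. `FakeEndpoint` below is purely illustrative, not the real `InferenceEndpoint` class or its API:

```python
class FakeEndpoint:
    # Toy model of the documented semantics: a scaled-to-zero endpoint
    # wakes up on the next request (after a cold-start delay), while a
    # paused endpoint stays down until it is explicitly resumed.
    def __init__(self):
        self.status = "running"

    def scale_to_zero(self):
        self.status = "scaledToZero"

    def pause(self):
        self.status = "paused"

    def resume(self):
        self.status = "running"

    def handle_request(self):
        if self.status == "scaledToZero":
            self.status = "running"  # implicit resume triggered by the request
            return "ok"
        if self.status == "paused":
            return "error: paused"   # only an explicit resume() helps here
        return "ok"

ep = FakeEndpoint()
ep.scale_to_zero()
print(ep.handle_request(), ep.status)  # ok running
ep.pause()
print(ep.handle_request(), ep.status)  # error: paused paused
```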

src/huggingface_hub/_login.py

Lines changed: 1 addition & 1 deletion

@@ -123,7 +123,7 @@ def logout(token_name: Optional[str] = None) -> None:
 
     Args:
         token_name (`str`, *optional*):
-            Name of the access token to logout from. If `None`, will logout from all saved access tokens.
+            Name of the access token to logout from. If `None`, will log out from all saved access tokens.
     Raises:
         [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError):
             If the access token name is not found.
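The documented `token_name=None` behavior can be sketched against a plain dict of saved tokens; this is a hypothetical helper for illustration, not the library's implementation (which manages tokens stored on disk):

```python
def fake_logout(saved_tokens, token_name=None):
    # Mirrors the docstring: None -> log out from all saved access
    # tokens; an unknown name raises ValueError.
    if token_name is None:
        saved_tokens.clear()
        return
    if token_name not in saved_tokens:
        raise ValueError(f"Access token not found: {token_name}")
    del saved_tokens[token_name]

tokens = {"work": "hf_placeholder_a", "personal": "hf_placeholder_b"}
fake_logout(tokens, "work")
print(sorted(tokens))  # ['personal']
fake_logout(tokens)
print(tokens)  # {}
```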

src/huggingface_hub/_space_api.py

Lines changed: 1 addition & 1 deletion

@@ -111,7 +111,7 @@ class SpaceRuntime:
             Current hardware of the space. Example: "cpu-basic". Can be `None` if Space
             is `BUILDING` for the first time.
         requested_hardware (`str` or `None`):
-            Requested hardware. Can be different than `hardware` especially if the request
+            Requested hardware. Can be different from `hardware` especially if the request
             has just been made. Example: "t4-medium". Can be `None` if no hardware has
             been requested yet.
         sleep_time (`int` or `None`):
