
Bump the pip group across 9 directories with 15 updates #3

Merged
merged 1 commit into master on Nov 25, 2024

Conversation

@dependabot dependabot[bot] commented on behalf of GitHub on Nov 23, 2024

Bumps the pip group with 1 update in the /compression/gpt2 directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/automatic-speech-recognition directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/fill-mask directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/text-generation directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/text-generation/run-generation-script directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/text2text-generation directory: transformers.
Bumps the pip group with 1 update in the /inference/huggingface/translation directory: transformers.
Bumps the pip group with 2 updates in the /training/HelloDeepSpeed directory: transformers and tqdm.
Bumps the pip group with 14 updates in the /training/MoQ/huggingface-transformers/examples/research_projects/lxmert directory:

| Package | From | To |
| --- | --- | --- |
| torch | 2.2.0 | 2.5.1 |
| numpy | 1.22.0 | 2.1.3 |
| tqdm | 4.66.3 | 4.67.0 |
| certifi | 2024.7.4 | 2024.8.30 |
| future | 0.18.3 | 1.0.0 |
| idna | 3.7 | 3.10 |
| joblib | 1.2.0 | 1.4.2 |
| jupyter-core | 4.11.2 | 5.7.2 |
| nbconvert | 6.5.1 | 7.16.4 |
| notebook | 6.4.12 | 7.2.2 |
| pillow | 10.3.0 | 11.0.0 |
| pyarrow | 14.0.1 | 18.0.0 |
| requests | 2.32.2 | 2.32.3 |
| urllib3 | 1.26.19 | 2.2.3 |
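
For context, a minimal sketch of how these new pins would look in that directory's pip requirements file; the file name and layout are assumptions (each example directory in this repo carries its own requirements), only the versions are taken from the table above:

```
# training/MoQ/huggingface-transformers/examples/research_projects/lxmert/requirements.txt (illustrative excerpt)
torch==2.5.1
numpy==2.1.3
tqdm==4.67.0
certifi==2024.8.30
future==1.0.0
idna==3.10
joblib==1.4.2
jupyter-core==5.7.2
nbconvert==7.16.4
notebook==7.2.2
pillow==11.0.0
pyarrow==18.0.0
requests==2.32.3
urllib3==2.2.3
```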

Updates transformers from 4.38.0 to 4.46.3

Release notes

Sourced from transformers' releases.

Patch release v4.46.3

One small fix for FSDP + gradient accumulation loss issue!

Patch release v4.46.2

Mostly follow-up fixes to finish the gradient accumulation work! Thanks to @techkang and @Ryukijano 🤗

Patch release v4.46.1

This is mostly for fx and onnx issues!

  • Fix regression loading dtype #34409 by @SunMarc
  • LLaVa: latency issues #34460 by @zucchini-nlp
  • Fix pix2struct #34374 by @IlyasMoutawwakil
  • Fix onnx non-exposable inplace aten op #34376 by @IlyasMoutawwakil
  • Fix torch.fx issue related to the new loss_kwargs keyword argument #34380 by @michaelbenayoun
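
To make the first item above concrete, the sketch below shows the kind of dtype-specified loading call that fix concerns; this is a minimal illustration, and the gpt2 checkpoint is only a stand-in, not a model these examples necessarily use:

```python
# Minimal sketch: request half-precision weights at load time; the 4.46.x patch
# notes mention a regression around dtype handling when loading checkpoints.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16)
print(next(model.parameters()).dtype)  # expected: torch.float16
```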

Release v4.46.0

New model additions

Moshi

The Moshi model was proposed in Moshi: a speech-text foundation model for real-time dialogue by Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave and Neil Zeghidour.

Moshi is a speech-text foundation model that casts spoken dialogue as speech-to-speech generation. Starting from a text language model backbone, Moshi generates speech as tokens from the residual quantizer of a neural audio codec, while modeling separately its own speech and that of the user into parallel streams. This allows for the removal of explicit speaker turns, and the modeling of arbitrary conversational dynamics. Moshi also predicts time-aligned text tokens as a prefix to audio tokens. This “Inner Monologue” method significantly improves the linguistic quality of generated speech and provides streaming speech recognition and text-to-speech. As a result, Moshi is the first real-time full-duplex spoken large language model, with a theoretical latency of 160ms, 200ms in practice.

Zamba

Zamba-7B-v1 is a hybrid between state-space models (specifically Mamba) and transformer blocks, and was trained using

... (truncated)

Commits

Updates tqdm from 4.66.3 to 4.67.0

Release notes

Sourced from tqdm's releases.

tqdm v4.67.0 stable

  • contrib.discord: replace disco-py with requests (#1536)

tqdm v4.66.6 stable

  • cli: zip-safe --manpath, --comppath (#1627)
  • misc framework updates (#1627)
    • fix pytest DeprecationWarning
    • fix snapcraft build
    • fix nbval DeprecationWarning
    • update & tidy workflows
    • bump pre-commit
    • docs: update URLs

tqdm v4.66.5 stable

tqdm v4.66.4 stable

  • rich: fix completion (#1395 <- #1306)
  • minor framework updates & code tidy (#1578)
Commits

Updates torch from 2.2.0 to 2.5.1

Release notes

Sourced from torch's releases.

PyTorch 2.5.1: bug fix release

This release is meant to fix a small set of regressions (the list is truncated in this excerpt). Besides the regression fixes, the release includes several documentation updates.

See release tracker pytorch/pytorch#132400 for additional information.

PyTorch 2.5.0 Release, SDPA CuDNN backend, Flex Attention

PyTorch 2.5 Release Notes

  • Highlights
  • Backwards Incompatible Change
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes
  • Performance
  • Documentation
  • Developers
  • Security

Highlights

We are excited to announce the release of PyTorch® 2.5! This release features a new CuDNN backend for SDPA, enabling speedups by default for users of SDPA on H100s or newer GPUs. As well, regional compilation of torch.compile offers a way to reduce the cold start up time for torch.compile by allowing users to compile a repeated nn.Module (e.g. a transformer layer in LLM) without recompilations. Finally, TorchInductor CPP backend offers solid performance speedup with numerous enhancements like FP16 support, CPP wrapper, AOT-Inductor mode, and max-autotune mode...
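
As a rough illustration of the cuDNN SDPA backend mentioned above, the sketch below forces scaled_dot_product_attention onto that backend. This is an assumption-laden example (it presumes PyTorch 2.5+, a CUDA build, and a GPU recent enough for cuDNN attention), not code from this repository:

```python
# Minimal sketch: opt in to the cuDNN backend for scaled dot product attention (PyTorch 2.5+).
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

if torch.cuda.is_available():
    # Hypothetical tensor shapes: (batch, heads, sequence, head_dim), half precision on GPU.
    q, k, v = (torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16) for _ in range(3))

    # Restrict SDPA to the cuDNN backend; if the hardware or build cannot serve it,
    # this raises instead of silently falling back to another kernel.
    with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
        out = F.scaled_dot_product_attention(q, k, v)
    print(out.shape)  # torch.Size([2, 8, 128, 64])
```

Regional compilation from the same release amounts to calling torch.compile on the repeated submodule (for example, each transformer layer) rather than on the whole model, so identical blocks share compilation work instead of triggering a fresh cold start each.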

Description has been truncated

Bumps the pip group with 1 update in the /compression/gpt2 directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/automatic-speech-recognition directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/fill-mask directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/text-generation directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/text-generation/run-generation-script directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/text2text-generation directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 1 update in the /inference/huggingface/translation directory: [transformers](https://github.com/huggingface/transformers).
Bumps the pip group with 2 updates in the /training/HelloDeepSpeed directory: [transformers](https://github.com/huggingface/transformers) and [tqdm](https://github.com/tqdm/tqdm).
Bumps the pip group with 14 updates in the /training/MoQ/huggingface-transformers/examples/research_projects/lxmert directory:

| Package | From | To |
| --- | --- | --- |
| [torch](https://github.com/pytorch/pytorch) | `2.2.0` | `2.5.1` |
| [numpy](https://github.com/numpy/numpy) | `1.22.0` | `2.1.3` |
| [tqdm](https://github.com/tqdm/tqdm) | `4.66.3` | `4.67.0` |
| [certifi](https://github.com/certifi/python-certifi) | `2024.7.4` | `2024.8.30` |
| [future](https://github.com/PythonCharmers/python-future) | `0.18.3` | `1.0.0` |
| [idna](https://github.com/kjd/idna) | `3.7` | `3.10` |
| [joblib](https://github.com/joblib/joblib) | `1.2.0` | `1.4.2` |
| [jupyter-core](https://github.com/jupyter/jupyter_core) | `4.11.2` | `5.7.2` |
| [nbconvert](https://github.com/jupyter/nbconvert) | `6.5.1` | `7.16.4` |
| [notebook](https://github.com/jupyter/notebook) | `6.4.12` | `7.2.2` |
| [pillow](https://github.com/python-pillow/Pillow) | `10.3.0` | `11.0.0` |
| [pyarrow](https://github.com/apache/arrow) | `14.0.1` | `18.0.0` |
| [requests](https://github.com/psf/requests) | `2.32.2` | `2.32.3` |
| [urllib3](https://github.com/urllib3/urllib3) | `1.26.19` | `2.2.3` |



Updates `transformers` from 4.38.0 to 4.46.3
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.38.0...v4.46.3)

Updates `tqdm` from 4.66.3 to 4.67.0
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](tqdm/tqdm@v4.66.3...v4.67.0)

Updates `torch` from 2.2.0 to 2.5.1
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](pytorch/pytorch@v2.2.0...v2.5.1)

Updates `numpy` from 1.22.0 to 2.1.3
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](numpy/numpy@v1.22.0...v2.1.3)

Updates `tqdm` from 4.66.3 to 4.67.0
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](tqdm/tqdm@v4.66.3...v4.67.0)

Updates `certifi` from 2024.7.4 to 2024.8.30
- [Commits](certifi/python-certifi@2024.07.04...2024.08.30)

Updates `future` from 0.18.3 to 1.0.0
- [Release notes](https://github.com/PythonCharmers/python-future/releases)
- [Changelog](https://github.com/PythonCharmers/python-future/blob/master/docs/changelog.rst)
- [Commits](PythonCharmers/python-future@v0.18.3...v1.0.0)

Updates `idna` from 3.7 to 3.10
- [Release notes](https://github.com/kjd/idna/releases)
- [Changelog](https://github.com/kjd/idna/blob/master/HISTORY.rst)
- [Commits](kjd/idna@v3.7...v3.10)

Updates `joblib` from 1.2.0 to 1.4.2
- [Release notes](https://github.com/joblib/joblib/releases)
- [Changelog](https://github.com/joblib/joblib/blob/main/CHANGES.rst)
- [Commits](joblib/joblib@1.2.0...1.4.2)

Updates `jupyter-core` from 4.11.2 to 5.7.2
- [Release notes](https://github.com/jupyter/jupyter_core/releases)
- [Changelog](https://github.com/jupyter/jupyter_core/blob/main/CHANGELOG.md)
- [Commits](jupyter/jupyter_core@4.11.2...v5.7.2)

Updates `nbconvert` from 6.5.1 to 7.16.4
- [Release notes](https://github.com/jupyter/nbconvert/releases)
- [Changelog](https://github.com/jupyter/nbconvert/blob/main/CHANGELOG.md)
- [Commits](jupyter/nbconvert@6.5.1...v7.16.4)

Updates `notebook` from 6.4.12 to 7.2.2
- [Release notes](https://github.com/jupyter/notebook/releases)
- [Changelog](https://github.com/jupyter/notebook/blob/@jupyter-notebook/[email protected]/CHANGELOG.md)
- [Commits](https://github.com/jupyter/notebook/compare/6.4.12...@jupyter-notebook/[email protected])

Updates `pillow` from 10.3.0 to 11.0.0
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](python-pillow/Pillow@10.3.0...11.0.0)

Updates `pyarrow` from 14.0.1 to 18.0.0
- [Release notes](https://github.com/apache/arrow/releases)
- [Commits](apache/arrow@go/v14.0.1...apache-arrow-18.0.0)

Updates `requests` from 2.32.2 to 2.32.3
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.32.2...v2.32.3)

Updates `urllib3` from 1.26.19 to 2.2.3
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](urllib3/urllib3@1.26.19...2.2.3)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: transformers
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: tqdm
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: torch
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: numpy
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: tqdm
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: certifi
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: future
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: idna
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: joblib
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: jupyter-core
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: nbconvert
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: notebook
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: pillow
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: pyarrow
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: requests
  dependency-type: direct:production
  dependency-group: pip
- dependency-name: urllib3
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot[bot] added the `dependencies` label (Pull requests that update a dependency file) on Nov 23, 2024
@akaday merged commit 89007e2 into master on Nov 25, 2024
@dependabot dependabot[bot] deleted the dependabot/pip/compression/gpt2/pip-6b7ccbe0aa branch on November 25, 2024 at 20:40