
ci: Update build.yml to suppress warnings about node.js versions #2166

Merged — 3 commits merged into ggerganov:master from tamo:patch-4 on May 19, 2024

Conversation

@tamo (Contributor) commented May 18, 2024

Just increasing the version numbers. The Actions log showed a number of warnings:

(screenshot: node.js deprecation warnings in the Actions log)

@ggerganov merged commit 4798be1 into ggerganov:master on May 19, 2024
49 checks passed
@tamo deleted the patch-4 branch May 19, 2024 10:02
jiahansu pushed a commit to WiseSync/whisper.cpp that referenced this pull request May 28, 2024
ci: Update build.yml to suppress warnings about node.js versions (ggerganov#2166)

* Update actions to suppress warnings about old node.js

https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/

* Update actions/upload-artifact, specify android cmdline-tools-version

* Use Java 20

Gradle 8.1 complains about Java 21:
https://docs.gradle.org/current/userguide/compatibility.html
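
For illustration, here is a minimal sketch of the kind of build.yml changes these commit messages describe, assuming common action names and major-version bumps; the job layout, the pinned cmdline-tools build number, and the Gradle invocation are placeholders, not the literal contents of the PR:

```yaml
# Hypothetical excerpt of a GitHub Actions job; versions and values are illustrative.
jobs:
  android:
    runs-on: ubuntu-latest
    steps:
      # Newer major versions of the official actions run on node20,
      # which silences the "node16 is deprecated" warnings.
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '20'        # Gradle 8.1 does not support Java 21
      - uses: android-actions/setup-android@v3
        with:
          cmdline-tools-version: '11076708'   # placeholder: pin an explicit cmdline-tools build
      - name: Build
        run: ./gradlew assembleRelease
      - uses: actions/upload-artifact@v4
        with:
          name: apk
          path: app/build/outputs/apk/
```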
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Aug 9, 2024
* tag 'v1.6.2':
  release : v1.6.2
  Revert "whisper : remove extra backend instance (huh?)" (ggerganov#2182)
  server : fix typo (ggerganov#2181)
  ruby : update bindings (ggerganov#2154)
  release : v1.6.1
  examples : add support for decoding input with ffmpeg (Linux) (ggerganov#2133)
  node : add flash_attn param (ggerganov#2170)
  ci: Update build.yml to suppress warnings about node.js versions (ggerganov#2166)
  release : v1.6.0
  whisper : use flash attention (ggerganov#2152)
  talk-llama : reject runs without required arguments (ggerganov#2153)
  sync : ggml
  metal : support FA without mask + add asserts (llama/7278)
  ggml : add RPC backend (llama/6829)
  rm wait() (llama/7233)
  CUDA: add FP32 FlashAttention vector kernel (llama/7188)
  scripts : sync ggml-rpc