
Conversation

adamdebono
Contributor

The max_len/ml parameter was missing in the Ruby binding.
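For context, a minimal, hypothetical sketch of how the new accessor could be used through the whispercpp gem; the model and audio paths are placeholders, and the token_timestamps accessor is an assumption here (in whisper.cpp, max_len only takes effect when token-level timestamps are enabled):

```ruby
require "whisper"

# Placeholder paths; substitute your own model and audio files.
whisper = Whisper::Context.new("path/to/ggml-base.en.bin")

params = Whisper::Params.new
params.token_timestamps = true  # assumed accessor; max_len is only honoured with token timestamps
params.max_len = 16             # the accessor added by this PR: cap segment length (in characters)

whisper.transcribe("path/to/audio.wav", params) do |whole_text|
  puts whole_text
end
```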

@KitaitiMakoto KitaitiMakoto changed the title Add ruby binding for max_len ruby : Add ruby binding for max_len Aug 6, 2025
@KitaitiMakoto
Collaborator

@adamdebono
Thank you! Seems almost okay. I added a comment, and can you add test for max_len in test_params.rb?

@adamdebono
Contributor Author

Thanks @KitaitiMakoto, I've made those changes.

I wasn't able to get the tests running, so I haven't confirmed that they pass; however, I've reused the same code as the other tests, so I assume they do.

I'm attempting to run rake test as mentioned in the README, but I'm getting a segfault. I've attached the log; it looks like the problem is in test_callback.rb#125. I had noticed a segfault in my own application when I set that callback multiple times, so presumably this is the same problem.

rake_test.log

@KitaitiMakoto
Collaborator

Ah, that segfault is caused by a bug in Ruby 3.4.1. The tests should succeed on Ruby 3.4.2 or later.

There is no problem with your code. I will merge it.

@KitaitiMakoto KitaitiMakoto merged commit 4245c77 into ggml-org:master Aug 7, 2025
55 checks passed
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Sep 24, 2025
* ggerganov/master: (72 commits)
  node : add win platform check for require path (ggml-org#3363)
  ci : update main-cuda.Dockerfile (ggml-org#3371)
  whisper : fixed crash in GPU device selection on multi-GPU systems (ggml-org#3372)
  wasm : change ggml model host to HF (ggml-org#3369)
  ruby : Add ruby binding for max_len (ggml-org#3365)
  stream.wasm : add language selection support (ggml-org#3354)
  whisper : reset conv scheduler when CoreML is used (ggml-org#3350)
  ggml : remove old kompute, cann (skip) (ggml-org#3349)
  talk-llama : sync llama.cpp
  sync : ggml
  vulkan : add fp16 support for the conv_2d kernel (llama/14872)
  vulkan: skip empty set_rows to avoid invalid API usage (llama/14860)
  HIP: Enable Matrix cores for MMQ Kernels, Enable stream-K for CDNA 3 (llama/14624)
  CANN: Implement GLU ops (llama/14884)
  musa: fix build warnings (unused variable) (llama/14869)
  ggml-cpu : disable GGML_NNPA by default due to instability (llama/14880)
  metal: SSM_SCAN performance (llama/14743)
  opencl: add fused `rms_norm_mul` (llama/14841)
  ggml : remove invalid portPos specifiers from dot files (llama/14838)
  rpc : check for null buffers in get/set/copy tensor endpoints (llama/14868)
  ...
@adamdebono
Contributor Author

@KitaitiMakoto Can a new version of the gem be published with this fix? I noticed that 1.8.0 was released last week, but there's been no update to the gem.

@KitaitiMakoto
Collaborator

I created a pull request to bump the version: #3461
Once it is merged, we will need to ask the gem owner to publish a release.

