Labels: question (Further information is requested)
Description
I am using the latest version of lm-polygraph.
When I run this script:
export VLLM_WORKER_MULTIPROC_METHOD=spawn
CUDA_VISIBLE_DEVICES=0 HYDRA_CONFIG=....../lm-polygraph/examples/configs/polygraph_eval_coqa.yaml polygraph_eval \
model=vllm \
model.path=....../models/qwen2.5-7b/main \
estimators=default_estimators_vllm \
stat_calculators=default_calculators_vllm \
subsample_eval_dataset=100
The model is stored at a local path, and I can load it successfully with transformers, so I think the problem lies somewhere in lm-polygraph.
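For reference, this is roughly how I verified the model loads with plain transformers (a minimal sketch; the path is the same elided local path as in the command above):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Elided local path, same as model.path in the command above.
model_path = "....../models/qwen2.5-7b/main"

# Both of these succeed outside of lm-polygraph.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

print(tokenizer("hello", return_tensors="pt"))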
I get this error:
INFO 07-16 17:08:14 [core.py:138] init engine (profile, create kv cache, warmup model) took 52.77 seconds
Error executing job with overrides: ['model=vllm', 'model.path=......', 'estimators=default_estimators_vllm', 'stat_calculators=default_calculators_vllm', 'subsample_eval_dataset=100']
Traceback (most recent call last):
File "....../polygraph/bin/polygraph_eval", line 58, in main
model = get_model(args)
^^^^^^^^^^^^^^^
File "....../polygraph/bin/polygraph_eval", line 447, in get_model
return get_whitebox_model(args, cache_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "....../polygraph/bin/polygraph_eval", line 499, in get_whitebox_model
tokenizer = load_module.load_tokenizer(**load_tok_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'external_module' has no attribute 'load_tokenizer'
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
[rank0]:[W716 17:08:15.799640413 ProcessGroupNCCL.cpp:1496] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
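From the traceback, get_whitebox_model expects the loading module it resolves (here 'external_module') to expose a load_tokenizer function, which apparently is not defined in the vLLM path. Purely as an illustration of what the call site seems to expect (the argument names below are my assumptions, not taken from lm-polygraph), such a module would look something like:

# external_module.py -- illustrative sketch only; kwargs names are assumed
from transformers import AutoTokenizer

def load_tokenizer(model_path, **kwargs):
    # Return a tokenizer for the local model directory,
    # matching the load_module.load_tokenizer(**load_tok_args) call
    # shown in the traceback above.
    return AutoTokenizer.from_pretrained(model_path, **kwargs)

Is this something I am supposed to provide myself, or should the vLLM configuration ship its own loader?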