daily_ete_test #744

Re-run triggered: September 15, 2025 05:07
Status: Failure
Total duration: 3d 0h 0m 5s
Artifacts: 1

daily_ete_test.yml
on: schedule

Jobs:
- Matrix: linux-build
- test_quantization (1h 9m)
- Matrix: test_evaluation
- Matrix: test_restful
- Matrix: test_tools
- get_benchmark_result (0s)
- get_coverage_report (0s)
- notify_to_feishu (0s)
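For orientation, a minimal sketch of how a scheduled workflow with a matrixed job like test_restful could be declared. This is an assumption, not the actual daily_ete_test.yml: the cron expression, matrix values, and step body are placeholders inferred from the job names and the 30-minute timeout seen in the annotations below.

```yaml
# Hypothetical sketch of daily_ete_test.yml — not the real file.
on:
  schedule:
    - cron: "0 20 * * *"   # assumed schedule; the actual cron is not shown in the run summary

jobs:
  test_restful:
    runs-on: self-hosted      # the annotations report self-hosted runner acquisition failures
    timeout-minutes: 30       # matches the 30-minute timeout seen in the annotations
    strategy:
      matrix:                 # backend/model values taken from the failing job names
        backend: [turbomind, pytorch]
        model: [Intern-S1, internlm2_5-20b, internlm2_5-20b-chat]
    steps:
      - name: Test lmdeploy - restful api
        run: echo "run restful api tests here"   # placeholder step
```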

Annotations

22 errors
- test_restful (turbomind, Intern-S1): The action 'Test lmdeploy - restful api' has timed out after 30 minutes.
- test_evaluation (chat): Process completed with exit code 1.
- test_restful (turbomind, internlm2_5-20b): The job was not acquired by Runner of type self-hosted even after multiple attempts.
- test_restful (pytorch, internlm2_5-20b): The job was not acquired by Runner of type self-hosted even after multiple attempts.
- test_restful (turbomind, internlm2_5-20b-chat): The job was not acquired by Runner of type self-hosted even after multiple attempts.
- test_evaluation (base): The job was not acquired by Runner of type self-hosted even after multiple attempts.
- test_tools (pytorch, llm, pipeline): The self-hosted runner lost communication with the server. Verify the machine is running and has a healthy network connection. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.
- test_tools (pytorch, mllm, restful): Process completed with exit code 1.
- test_tools (pytorch, llm, chat): The self-hosted runner lost communication with the server (same error as above).
- test_tools (turbomind, llm, chat): Process completed with exit code 1.
- test_tools (pytorch, llm, restful): The self-hosted runner lost communication with the server (same error as above).
- test_tools (turbomind, llm, local_case): The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- test_tools (turbomind, llm, restful): The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- get_benchmark_result: The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- test_tools (pytorch, mllm, pipeline): The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- test_tools (turbomind, mllm, pipeline): The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- test_restful (pytorch, Intern-S1): The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- test_tools (turbomind, llm, pipeline): The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- test_tools (turbomind, mllm, restful): The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- get_coverage_report: The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.
- daily_ete_test: Internal server error. Correlation ID: f54aaec3-9abe-43e6-8e6c-ea653f569cd9
- notify_to_feishu: The job has exceeded the maximum execution time while awaiting a runner for 24h0m0s.