Is replacing LLM-generated variations with a fixed mock list in map_query_semantic_space a mistake? #1320
Unanswered
zboyr asked this question in Forums - Q&A
Replies: 0 comments
Hi team,
I was reviewing the code for map_query_semantic_space in adaptive_crawler.py (lines 668–737), and I noticed that the original logic for generating query variations with an LLM (via perform_completion_with_backoff) has been commented out. In its place, a hardcoded set of mock queries is now used.
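For anyone skimming the thread, here is a minimal sketch of the pattern being described. This is not the actual crawl4ai source: only the names map_query_semantic_space and perform_completion_with_backoff come from the code; the mock list contents and everything else are hypothetical.

```python
# Hypothetical illustration of the reported change, not the real implementation.

# Static stand-in variations (contents invented for this sketch).
MOCK_VARIATIONS = [
    "overview of the topic",
    "detailed explanation",
    "practical examples",
]


def map_query_semantic_space(query: str) -> list[str]:
    """Return query variations used to probe the embedding space."""
    # Original (now commented-out) path, roughly: call the LLM through
    # perform_completion_with_backoff and parse its response into a list
    # of query-specific rephrasings.
    #
    # Current behavior as reported: ignore `query` entirely and return
    # the same fixed list every time, so the embedding step sees no
    # query-specific semantic diversity.
    return list(MOCK_VARIATIONS)
```

As the sketch shows, the mock path returns identical variations for every input, which is what prompts the question about whether this was intentional.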
Is this change intentional, or could it be an error? A static list of queries seems to defeat the purpose of generating diverse semantic variations for the embedding process. Was this meant as a temporary measure for testing, or should the dynamic LLM-based generation be restored?
Would appreciate any clarification or context regarding this change!
Thanks!