
Conversation

@scosman
Collaborator

@scosman scosman commented Nov 21, 2025

Summary by CodeRabbit

Release Notes

  • Chores
    • Reorganized dependencies to make advanced features optional, reducing default package footprint
    • RAG and Google Vertex AI features now available as optional installations
    • Improved startup performance for users not utilizing optional functionality


…r. WIP, imports should be optional, and CI may not work
@coderabbitai
Contributor

coderabbitai bot commented Nov 21, 2025

Important

Review skipped

Ignore keyword(s) in the title.

⛔ Ignored keywords (2)
  • WIP
  • Draft

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This PR refactors the codebase to reorganize optional dependencies into separate groups (google_vertex, rag) and implements lazy/conditional imports via TYPE_CHECKING and optional_import patterns, deferring the loading of optional dependencies until runtime rather than at module initialization.
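
The general shape of that pattern, sketched with a stdlib stand-in (`decimal` plays the role of a heavy optional dependency; the repo's `optional_import` helper is not shown here):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static type checkers; never executed at runtime.
    from decimal import Decimal


def to_decimal(value: str) -> "Decimal":
    # Deferred to first call, so importing this module stays cheap and
    # succeeds even when the dependency is not installed.
    from decimal import Decimal

    return Decimal(value)


print(to_decimal("1.50"))  # 1.50
```

The string annotation plus the TYPE_CHECKING guard keeps type checkers happy without paying the import cost at module load.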

Changes

Cohort / File(s) Summary
Dependency Configuration
app/desktop/pyproject.toml, libs/core/pyproject.toml, pyproject.toml
Moved optional dependencies (google-cloud-aiplatform, vertexai, lancedb, llama-index, pypdf, pypdfium2, pillow) into optional groups [google_vertex] and [rag] in libs/core; updated app/desktop to reference workspace-based kiln-ai extras; added together>=1.5.0 to root dependencies
Chunkers Module Refactoring
libs/core/kiln_ai/adapters/chunkers/__init__.py, libs/core/kiln_ai/adapters/chunkers/chunker_registry.py
Removed fixed_window_chunker and semantic_chunker from public exports; converted top-level imports in registry to lazy local imports within match branches
Chunker Implementations – Lazy Imports
libs/core/kiln_ai/adapters/chunkers/embedding_wrapper.py, libs/core/kiln_ai/adapters/chunkers/fixed_window_chunker.py, libs/core/kiln_ai/adapters/chunkers/semantic_chunker.py
Replaced direct top-level imports of llama_index components with TYPE_CHECKING blocks and runtime optional_import for BaseEmbedding, SentenceSplitter, and semantic node parser types
Fine-tune Registry Refactoring
libs/core/kiln_ai/adapters/fine_tune/finetune_registry.py
Replaced static dict-based registry with lazy _FinetuneRegistry class; added get_finetune_adapter_class() function; moved VertexFinetune import to lazy runtime resolution
Vertex Fine-tune – Lazy Imports
libs/core/kiln_ai/adapters/fine_tune/vertex_finetune.py
Converted top-level imports of vertexai and google-cloud-storage to TYPE_CHECKING blocks with runtime optional_import resolution
Vector Store Adapters – Lazy Imports
libs/core/kiln_ai/adapters/vector_store/lancedb_adapter.py, libs/core/kiln_ai/adapters/vector_store/lancedb_helpers.py, libs/core/kiln_ai/adapters/vector_store/vector_store_registry.py
Replaced direct llama_index imports with TYPE_CHECKING guards and optional_import runtime resolution; moved LanceDBAdapter import to lazy local scope in registry function
Vector Store Loaders – Lazy Imports
libs/core/kiln_ai/adapters/vector_store_loaders/vector_store_loader.py
Converted TextNode import from direct top-level to TYPE_CHECKING block with optional_import fallback
PDF Utilities & Extractors – Lazy Imports
libs/core/kiln_ai/utils/pdf_utils.py, libs/core/kiln_ai/adapters/extractors/litellm_extractor.py
Replaced top-level pypdf/pypdfium2 imports with TYPE_CHECKING blocks and optional_import; moved convert_pdf_to_images and split_pdf_into_pages imports to local scope within calling methods
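
The libs/core split described in the first row would look roughly like this (illustrative sketch only; version pins and exact member lists follow the summary above, not a verified copy of the file):

```toml
[project.optional-dependencies]
google_vertex = ["google-cloud-aiplatform", "vertexai"]
rag = ["lancedb", "llama-index", "pypdf", "pypdfium2", "pillow"]
```

Consumers then opt in with `pip install "kiln-ai[rag]"` (or list both extras), while a bare `pip install kiln-ai` skips the heavy packages.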

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Extra attention required:
    • libs/core/kiln_ai/adapters/fine_tune/finetune_registry.py — introduces a new _FinetuneRegistry class with getitem and get methods; verify lazy import behavior and that callers can seamlessly use the new registry object in place of the old dict
    • libs/core/pyproject.toml — dependency reorganization into optional groups; ensure optional-dependencies syntax is correct and that extras resolution works correctly downstream
    • app/desktop/pyproject.toml — workspace reference to kiln-ai with extras; verify UV workspace resolution works as intended
    • All files using optional_import — confirm that the optional_import utility handles missing optional dependencies gracefully and that error messages are clear when extras are not installed
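
For context on that last point, a minimal re-implementation of what such a helper typically does (hypothetical sketch; the repo's actual optional_import may differ in signature and wording):

```python
import importlib


def optional_import(module_name: str, extra: str):
    """Import module_name, or raise an error naming the missing extra.

    Hypothetical sketch; the real helper in kiln_ai may differ.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as exc:
        raise ImportError(
            f"{module_name!r} requires the {extra!r} extra. "
            f"Install it with: pip install 'kiln-ai[{extra}]'"
        ) from exc


# Present modules resolve normally:
json_mod = optional_import("json", "rag")
print(json_mod.__name__)  # json

# Missing modules fail with an actionable message:
try:
    optional_import("definitely_not_installed_xyz", "rag")
except ImportError as e:
    print("rag" in str(e))  # True
```

The key property to verify is the second branch: a missing dependency should surface the extras group to install, not a bare ModuleNotFoundError.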

Suggested reviewers

  • sfierro
  • chiang-daniel

Poem

🐰 Imports once eager, now they rest and wait,
Dependencies bundled, grouped, and late,
Lazy paths blossom where once they stood,
Optional extras—lighter than wood!
The registry learns to fetch on demand,
A cleaner codebase, oh so grand! ✨

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): The pull request description is completely empty, despite a clear template requiring sections like 'What does this PR do?', 'Related Issues', and checklists. Resolution: add a description following the template: explain what the PR does (making dependencies optional), link any related issues, confirm CLA agreement, and complete the checklists.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 10.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title accurately describes the main objective: making larger dependencies optional to reduce the initial library size, which aligns with all the changes in the PR.



Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions

📊 Coverage Report

Overall Coverage: 92%

Diff: origin/main...HEAD

No lines with coverage information in this diff.


@scosman
Collaborator Author

scosman commented Nov 21, 2025

@CodeRabbit review

@coderabbitai
Contributor

coderabbitai bot commented Nov 21, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
libs/core/kiln_ai/adapters/fine_tune/vertex_finetune.py (1)

3-31: Vertex AI TYPE_CHECKING/optional_import wiring looks correct, with a small import-path nuance

The TYPE_CHECKING block plus optional_import usage makes this adapter safe to import when the google_vertex extras aren’t installed, while keeping type checkers happy—nice fit with the optional-deps goal. The only subtlety is google_cloud_storage = optional_import("google.cloud", "google_vertex", "google-cloud-storage"); storage = google_cloud_storage.storage, which relies on google.cloud exposing a storage attribute; if you run into import issues here, consider switching to importing "google.cloud.storage" directly via optional_import instead. This keeps the change focused strictly on import mechanics, not behavior. Based on learnings.
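
The underlying nuance: importing a package does not automatically import its submodules as attributes (os.path is a well-known exception). A stdlib demonstration, assuming a fresh interpreter:

```python
import importlib

# `xml` does not import `xml.etree` on its own, so attribute access on
# the parent package usually fails until someone imports the submodule:
xml_pkg = importlib.import_module("xml")
print(hasattr(xml_pkg, "etree"))  # likely False in a fresh interpreter

# Importing the dotted path directly is robust:
etree = importlib.import_module("xml.etree")
print(etree.__name__)  # xml.etree
```

This is why importing "google.cloud.storage" directly, as suggested above, avoids relying on google.cloud happening to expose a storage attribute.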

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e33b8b0 and 1985721.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (16)
  • app/desktop/pyproject.toml (2 hunks)
  • libs/core/kiln_ai/adapters/chunkers/__init__.py (1 hunks)
  • libs/core/kiln_ai/adapters/chunkers/chunker_registry.py (1 hunks)
  • libs/core/kiln_ai/adapters/chunkers/embedding_wrapper.py (1 hunks)
  • libs/core/kiln_ai/adapters/chunkers/fixed_window_chunker.py (1 hunks)
  • libs/core/kiln_ai/adapters/chunkers/semantic_chunker.py (2 hunks)
  • libs/core/kiln_ai/adapters/extractors/litellm_extractor.py (2 hunks)
  • libs/core/kiln_ai/adapters/fine_tune/finetune_registry.py (1 hunks)
  • libs/core/kiln_ai/adapters/fine_tune/vertex_finetune.py (2 hunks)
  • libs/core/kiln_ai/adapters/vector_store/lancedb_adapter.py (2 hunks)
  • libs/core/kiln_ai/adapters/vector_store/lancedb_helpers.py (1 hunks)
  • libs/core/kiln_ai/adapters/vector_store/vector_store_registry.py (1 hunks)
  • libs/core/kiln_ai/adapters/vector_store_loaders/vector_store_loader.py (2 hunks)
  • libs/core/kiln_ai/utils/pdf_utils.py (1 hunks)
  • libs/core/pyproject.toml (2 hunks)
  • pyproject.toml (2 hunks)
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: scosman
Repo: Kiln-AI/Kiln PR: 733
File: app/web_ui/src/lib/stores/local_storage_store.ts:9-11
Timestamp: 2025-10-21T00:06:57.115Z
Learning: When scosman is refactoring code by moving it to a new location, he prefers to keep the moved code unchanged and not mix in functional improvements or bug fixes during the refactor.
📚 Learning: 2025-10-29T11:19:31.059Z
Learnt from: leonardmq
Repo: Kiln-AI/Kiln PR: 759
File: app/web_ui/src/lib/api_schema.d.ts:2485-2490
Timestamp: 2025-10-29T11:19:31.059Z
Learning: In libs/server/kiln_server/document_api.py and the generated app/web_ui/src/lib/api_schema.d.ts, CreateVectorStoreConfigRequest.properties is now a union of three distinct public schemas: LanceDBConfigFTSPropertiesPublic, LanceDBConfigVectorPropertiesPublic, and LanceDBConfigHybridPropertiesPublic. These are intentionally split to allow independent evolution; do not suggest merging them.

Applied to files:

  • libs/core/kiln_ai/adapters/vector_store/vector_store_registry.py
  • libs/core/kiln_ai/adapters/vector_store/lancedb_helpers.py
📚 Learning: 2025-08-08T16:13:26.526Z
Learnt from: leonardmq
Repo: Kiln-AI/Kiln PR: 418
File: libs/core/kiln_ai/adapters/ml_model_list.py:1875-1890
Timestamp: 2025-08-08T16:13:26.526Z
Learning: In libs/core/kiln_ai/adapters/ml_model_list.py (Python), do not blanket-add r1_thinking/optional_r1_thinking parsers for R1-style models. Parser usage is provider-specific and must be based on observed responses in tests. For PR Kiln-AI/Kiln#418, deepseek_r1_0528_distill_qwen3_8b providers were validated without needing a parser.

Applied to files:

  • libs/core/kiln_ai/adapters/chunkers/__init__.py
  • libs/core/kiln_ai/adapters/chunkers/embedding_wrapper.py
📚 Learning: 2025-09-25T06:38:43.145Z
Learnt from: leonardmq
Repo: Kiln-AI/Kiln PR: 654
File: app/web_ui/src/routes/(app)/docs/rag_configs/[project_id]/create_rag_config/options_groups.ts:81-88
Timestamp: 2025-09-25T06:38:43.145Z
Learning: In the Kiln AI system, ChunkerConfig does not have an is_archived field and cannot be archived, unlike ExtractorConfig which does support archiving. Only certain config types in the RAG system support the archiving functionality - chunker configs are not one of them.

Applied to files:

  • libs/core/kiln_ai/adapters/chunkers/chunker_registry.py
📚 Learning: 2025-09-25T06:38:43.145Z
Learnt from: leonardmq
Repo: Kiln-AI/Kiln PR: 654
File: app/web_ui/src/routes/(app)/docs/rag_configs/[project_id]/create_rag_config/options_groups.ts:81-88
Timestamp: 2025-09-25T06:38:43.145Z
Learning: In the Kiln AI system, ChunkerConfig does not have an is_archived field and cannot be archived, unlike ExtractorConfig which does support archiving. Only certain config types in the RAG system support the archiving functionality.

Applied to files:

  • libs/core/kiln_ai/adapters/chunkers/chunker_registry.py
📚 Learning: 2025-08-08T15:50:45.334Z
Learnt from: leonardmq
Repo: Kiln-AI/Kiln PR: 418
File: libs/core/kiln_ai/adapters/ml_model_list.py:221-228
Timestamp: 2025-08-08T15:50:45.334Z
Learning: In libs/core/kiln_ai/adapters/ml_model_list.KilnModelProvider, naming policy: keep cross-provider feature flags unprefixed (e.g., reasoning_optional_for_structured_output) to allow reuse across providers; prefix provider-specific toggles with the provider name (e.g., siliconflow_enable_thinking) when the implementation is specific and potentially unsafe to generalize.

Applied to files:

  • libs/core/kiln_ai/adapters/fine_tune/finetune_registry.py
📚 Learning: 2025-08-19T06:11:15.247Z
Learnt from: leonardmq
Repo: Kiln-AI/Kiln PR: 413
File: app/web_ui/src/routes/(app)/docs/rag_configs/[project_id]/table_rag_config_row.svelte:53-53
Timestamp: 2025-08-19T06:11:15.247Z
Learning: In the Kiln AI RAG system, when extractor_config, chunker_config, or embedding_config are missing from a RagConfigWithSubConfigs object, this represents data corruption that should be allowed to surface as an error rather than being handled defensively with optional chaining. leonardmq prefers to let such corruption bubble up as errors for proper diagnosis rather than silently degrading functionality.

Applied to files:

  • libs/core/kiln_ai/adapters/vector_store/lancedb_adapter.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Desktop Apps (macos-15-intel)
  • GitHub Check: Build Desktop Apps (macos-latest)
  • GitHub Check: Build Desktop Apps (windows-latest)
🔇 Additional comments (15)
pyproject.toml (1)

13-13: Root workspace dependency and uv workspace members look fine

Adding together>=1.5.0 at the workspace level and updating the uv workspace members line both look syntactically and structurally correct, and they fit the “optional heavy deps live in the core lib/extras, while the workspace can depend on them directly” story. Just double‑check that together is only needed at the workspace/app level and not as a hard dependency of the published kiln-ai core package.

Also applies to: 34-34

libs/core/kiln_ai/adapters/extractors/litellm_extractor.py (2)

170-176: Local import of convert_pdf_to_images is appropriate

Moving convert_pdf_to_images into convert_pdf_page_to_image_input keeps import-time surface smaller and only touches this code path when needed; behavior is unchanged.


243-247: Local import of split_pdf_into_pages matches the lazy-loading pattern

Importing split_pdf_into_pages inside _extract_pdf_page_by_page is consistent with the optional-deps strategy and doesn’t alter the surrounding control flow.

app/desktop/pyproject.toml (1)

11-13: Desktop app explicitly opts into kiln-ai extras and workspace source

Declaring kiln-ai, kiln-ai[google_vertex], and kiln-ai[rag] plus wiring kiln-ai = { workspace = true } gives the desktop app a full-featured kiln-ai install while keeping the core library’s heavy deps optional. The redundancy of listing both kiln-ai and its extras is harmless and makes the intent explicit.

Also applies to: 26-26

libs/core/kiln_ai/adapters/vector_store_loaders/vector_store_loader.py (1)

2-23: Lazy TextNode resolution via optional_import is consistent with the RAG pattern

Using TYPE_CHECKING for TextNode and resolving it at runtime via optional_import("llama_index.core", "rag") keeps static typing while allowing this loader to import even when the RAG extras aren’t installed. Just ensure optional_import for llama_index.core indeed exposes .schema.TextNode as used here (and in the related helpers) so this stays robust across llama-index versions.

libs/core/kiln_ai/adapters/chunkers/fixed_window_chunker.py (1)

1-16: FixedWindowChunker’s TYPE_CHECKING + optional_import setup looks good

Deferring SentenceSplitter to a TYPE_CHECKING import and a runtime optional_import("llama_index.core", "rag") cleanly decouples this chunker from a hard llama-index dependency while preserving the existing chunking behavior.

libs/core/kiln_ai/adapters/vector_store/lancedb_helpers.py (1)

1-19: LanceDB helpers now follow the same lazy-import pattern and have clearer typing

Guarding the llama-index and LanceDB imports with TYPE_CHECKING and optional_import keeps this module importable without the RAG extras, and the explicit LanceDBVectorStore return annotation on lancedb_construct_from_config improves type clarity without changing behavior. Just confirm that optional_import("llama_index.core", "rag") and optional_import("llama_index.vector_stores.lancedb", "rag") resolve to modules exposing the attributes used here under the versions you target.

Also applies to: 35-39

libs/core/kiln_ai/adapters/chunkers/semantic_chunker.py (1)

1-25: SemanticChunker’s llama-index imports are now safely optional

Using TYPE_CHECKING plus optional_import("llama_index.core", "rag") for BaseEmbedding, SemanticSplitterNodeParser, and Document lets this chunker participate in RAG when the extras are installed while keeping the core library importable without them; the chunking logic itself is untouched.

libs/core/kiln_ai/adapters/fine_tune/finetune_registry.py (1)

10-25: Lazy helper for provider → adapter mapping looks good

The branching logic and the local import for VertexFinetune are straightforward and achieve the intended lazy-load behavior without changing the mapping for other providers.

libs/core/kiln_ai/utils/pdf_utils.py (1)

11-22: Optional PDF deps wired through optional_import – verify import‑time behavior is intended

The TYPE_CHECKING/runtime split and use of the "rag" extras group via optional_import for both pypdf and pypdfium2 look consistent with the new optional‑deps strategy.

Note that PdfReader = pypdf.PdfReader and PdfWriter = pypdf.PdfWriter run at module import time, so if optional_import raises when pypdf is missing, importing pdf_utils will still fail immediately (with a nicer error, presumably), rather than only when a PDF helper is first called. If you’d prefer failure to be strictly call‑site based, those assignments would need to move behind a function boundary.

If the goal is just “only RAG users need to install these libs and get a clear error otherwise”, the current approach is fine.
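
Moving the assignments behind a function boundary would look roughly like this, with json standing in for pypdf (names hypothetical):

```python
import importlib


def _pdf():
    # Resolved on first use; a missing optional dependency now fails at
    # the call site instead of when pdf_utils is imported.
    return importlib.import_module("json")  # stand-in for "pypdf"


def dump_pages(pages):
    # Example helper: any attribute access goes through _pdf(), so the
    # module itself never touches the optional dependency at import time.
    return _pdf().dumps(pages)


print(dump_pages([1, 2]))  # [1, 2]
```

Python caches the module in sys.modules after the first call, so the per-call overhead is a dict lookup.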

libs/core/pyproject.toml (1)

35-37: Optional‑dependency groups align with lazy import usage

Moving the heavy RAG and Vertex dependencies into project.optional-dependencies (google_vertex, rag) and keeping only the lighter core deps in dependencies lines up well with the lazy/optional import patterns in the code (e.g., optional_import(..., "rag") in pdf_utils.py and similar in the vector/Lance/vertex paths).

Assuming documentation and install instructions are updated to mention kiln-ai[google_vertex] and kiln-ai[rag], this packaging split looks solid.

Also applies to: 49-61

libs/core/kiln_ai/adapters/chunkers/chunker_registry.py (1)

12-15: Local imports for chunkers correctly defer optional deps

Moving FixedWindowChunker and SemanticChunker imports inside their respective match branches achieves the desired lazy‑load behavior without changing the external API or control flow. Given Python’s import caching, the overhead of repeated calls here is negligible compared to chunker construction.

Also applies to: 18-19

libs/core/kiln_ai/adapters/vector_store/vector_store_registry.py (1)

38-49: Lazy LanceDBAdapter import is consistent and safe

Deferring LanceDBAdapter import to the LANCE_DB branch keeps LanceDB (and its transitive deps) out of the import path unless one of the Lance-backed store types is configured, while preserving the existing adapter creation and caching behavior.

libs/core/kiln_ai/adapters/chunkers/embedding_wrapper.py (1)

7-11: LGTM!

The lazy import pattern for BaseEmbedding is correctly implemented. The TYPE_CHECKING guard ensures proper static typing while deferring the actual import until runtime when the class is instantiated.

libs/core/kiln_ai/adapters/chunkers/__init__.py (1)

7-13: LGTM! Public API surface appropriately narrowed.

The removal of fixed_window_chunker and semantic_chunker from the public exports aligns with the PR objective of making optional dependencies lazy-loaded. The chunker registry now handles lazy imports of these chunkers at runtime, avoiding eager dependency loading.

Comment on lines +28 to +51
# For backward compatibility, create a module-level registry that works lazily
class _FinetuneRegistry:
"""Lazy registry that only imports optional adapters when accessed."""

def __getitem__(self, key: ModelProviderName) -> Type[BaseFinetuneAdapter]:
if key == ModelProviderName.openai:
return OpenAIFinetune
elif key == ModelProviderName.fireworks_ai:
return FireworksFinetune
elif key == ModelProviderName.together_ai:
return TogetherFinetune
elif key == ModelProviderName.vertex:
return get_finetune_adapter_class(ModelProviderName.vertex)
else:
raise ValueError(f"Unsupported provider: {key}")

def get(self, key: ModelProviderName, default=None):
try:
return self[key]
except (ValueError, KeyError):
return default


finetune_registry = _FinetuneRegistry()


⚠️ Potential issue | 🟡 Minor

_FinetuneRegistry should mimic dict semantics more closely

Right now __getitem__ raises ValueError instead of KeyError, which diverges from the old dict-based registry and from the intent in the comment (“For backward compatibility”). Any code that previously relied on KeyError (or on treating unsupported providers as missing keys) might behave differently.

You can both centralize the logic and restore dict-like behavior by delegating to get_finetune_adapter_class and translating unsupported providers into KeyError:

 class _FinetuneRegistry:
     """Lazy registry that only imports optional adapters when accessed."""
 
-    def __getitem__(self, key: ModelProviderName) -> Type[BaseFinetuneAdapter]:
-        if key == ModelProviderName.openai:
-            return OpenAIFinetune
-        elif key == ModelProviderName.fireworks_ai:
-            return FireworksFinetune
-        elif key == ModelProviderName.together_ai:
-            return TogetherFinetune
-        elif key == ModelProviderName.vertex:
-            return get_finetune_adapter_class(ModelProviderName.vertex)
-        else:
-            raise ValueError(f"Unsupported provider: {key}")
+    def __getitem__(self, key: ModelProviderName) -> Type[BaseFinetuneAdapter]:
+        try:
+            return get_finetune_adapter_class(key)
+        except ValueError as exc:
+            raise KeyError(key) from exc
 
     def get(self, key: ModelProviderName, default=None):
-        try:
-            return self[key]
-        except (ValueError, KeyError):
-            return default
+        try:
+            return self[key]
+        except KeyError:
+            return default

This keeps the lazy behavior while aligning closer to the original dict API and avoiding duplication of the mapping logic.

🤖 Prompt for AI Agents
In libs/core/kiln_ai/adapters/fine_tune/finetune_registry.py around lines 28 to
51, __getitem__ currently raises ValueError and duplicates mapping logic; change
it to delegate to get_finetune_adapter_class for all providers, and if that
function indicates unsupported provider, raise KeyError to match dict semantics;
also update get to rely on the new __getitem__ behavior so unsupported providers
return the default rather than triggering a ValueError.

Comment on lines +37 to +44
llama_index = optional_import("llama_index.core", "rag")
StorageContext = llama_index.StorageContext
VectorStoreIndex = llama_index.VectorStoreIndex
BaseNode = llama_index.schema.BaseNode
TextNode = llama_index.schema.TextNode
VectorStoreQuery = llama_index.core.vector_stores.types.VectorStoreQuery
LlamaIndexVectorStoreQuery = VectorStoreQuery
VectorStoreQueryResult = llama_index.core.vector_stores.types.VectorStoreQueryResult


⚠️ Potential issue | 🔴 Critical

Fix name collision and incorrect attribute paths.

This code has two critical issues:

  1. Name collision: Line 42 shadows the VectorStoreQuery imported from kiln_ai.adapters.vector_store.base_vector_store_adapter at line 11, making the Kiln AI type inaccessible in the runtime path.

  2. Incorrect attribute paths: Lines 42 and 44 use llama_index.core.vector_stores.types.* but since llama_index was already assigned llama_index.core on line 37, this attempts to access llama_index.core.core.vector_stores.types.* which doesn't exist.

Apply this diff to fix both issues:

 else:
     llama_index = optional_import("llama_index.core", "rag")
     StorageContext = llama_index.StorageContext
     VectorStoreIndex = llama_index.VectorStoreIndex
     BaseNode = llama_index.schema.BaseNode
     TextNode = llama_index.schema.TextNode
-    VectorStoreQuery = llama_index.core.vector_stores.types.VectorStoreQuery
-    LlamaIndexVectorStoreQuery = VectorStoreQuery
-    VectorStoreQueryResult = llama_index.core.vector_stores.types.VectorStoreQueryResult
+    LlamaIndexVectorStoreQuery = llama_index.vector_stores.types.VectorStoreQuery
+    VectorStoreQueryResult = llama_index.vector_stores.types.VectorStoreQueryResult
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
llama_index = optional_import("llama_index.core", "rag")
StorageContext = llama_index.StorageContext
VectorStoreIndex = llama_index.VectorStoreIndex
BaseNode = llama_index.schema.BaseNode
TextNode = llama_index.schema.TextNode
VectorStoreQuery = llama_index.core.vector_stores.types.VectorStoreQuery
LlamaIndexVectorStoreQuery = VectorStoreQuery
VectorStoreQueryResult = llama_index.core.vector_stores.types.VectorStoreQueryResult
llama_index = optional_import("llama_index.core", "rag")
StorageContext = llama_index.StorageContext
VectorStoreIndex = llama_index.VectorStoreIndex
BaseNode = llama_index.schema.BaseNode
TextNode = llama_index.schema.TextNode
LlamaIndexVectorStoreQuery = llama_index.vector_stores.types.VectorStoreQuery
VectorStoreQueryResult = llama_index.vector_stores.types.VectorStoreQueryResult
