
Add NVIDIA Integration for Local LLM Support#144

Open
Vikranth3140 wants to merge 6 commits into dzhng:main from Finance-LLMs:main

Conversation

@Vikranth3140

This PR closes #143 by adding support for NVIDIA GPU acceleration in local LLM inference, allowing users with NVIDIA hardware to leverage CUDA for faster processing when using local models.

The integration enables optional local LLM usage via environment variables. GPU acceleration is handled transparently: if the local server (e.g., Ollama, LM Studio) is configured with CUDA support, inference runs on the NVIDIA GPU without any changes in this codebase. No new dependencies are added, keeping the codebase lightweight.
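As a minimal sketch of what "optional local LLM usage via environment variables" could look like: pointing an OpenAI-compatible base URL at a CUDA-enabled local server. The variable names below are hypothetical (check the repo's README for the exact ones this PR introduces); `http://localhost:11434/v1` is Ollama's default OpenAI-compatible endpoint.

```shell
# Hypothetical env-var names for illustration — the PR may use different ones.
# Point the app at a local OpenAI-compatible server (Ollama shown here).
export OPENAI_BASE_URL="http://localhost:11434/v1"
# Local servers typically ignore the key, but clients often require it to be set.
export OPENAI_API_KEY="ollama"
# Model name as known to the local server, e.g. one pulled via `ollama pull`.
export LOCAL_MODEL="llama3.1"
```

With a setup like this, CUDA usage is entirely the local server's concern: an Ollama or LM Studio build with CUDA support will run the model on the NVIDIA GPU, while the application code is unchanged.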

Please review and merge if everything looks good! Let me know if any adjustments are needed.

