Added support for gemini #25
Merged
Introduce a new "gemini" provider listed by SupportedProviders and
resolvable by the llm dispatcher. A placeholder implementation
returns a clear 'not implemented' error for every LLM operation but
already maps ModelFamily {gpt, reasoning} to the concrete Gemini model
identifiers requested:
* small → gemini-2.5-flash-preview-05-20
* large → gemini-2.5-pro-preview-06-05
Once implemented, the provider will read its API key from the
GEMINI_API_KEY environment variable. Current functionality is
unaffected, since OpenAI remains the default backend.
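The size-to-identifier mapping described above could be sketched as follows. The function name mapGeminiModel appears later in this PR; its exact signature is illustrative, and only the two model identifiers come from the change itself:

```go
package main

import "fmt"

// mapGeminiModel resolves a (family, size) pair to a concrete Gemini model
// identifier. In this sketch both families ("gpt" and "reasoning") resolve
// purely by size, matching the two identifiers listed in the PR.
func mapGeminiModel(family, size string) (string, error) {
	switch size {
	case "small":
		return "gemini-2.5-flash-preview-05-20", nil
	case "large":
		return "gemini-2.5-pro-preview-06-05", nil
	default:
		return "", fmt.Errorf("unsupported model size %q for family %q", size, family)
	}
}

func main() {
	id, err := mapGeminiModel("gpt", "small")
	fmt.Println(id, err)
}
```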
Adds llm/dispatcher_test.go with unit tests covering:
* Correct identifier resolution for every (family, size) pair.
* The error returned when an unsupported size is requested.
This guards future refactors of mapGeminiModel.
Add llm/internal/gemini package containing gemini.go with exported helpers mirroring the OpenAI interface. Each function currently returns ErrNotImplemented, allowing the rest of the codebase to compile while future tasks wire up the real HTTP logic. Exposed helpers:
* GetWorkspaceChangeProposals
* GetModuleContext
* GetModuleExternalContexts
An exported ErrNotImplemented sentinel error is provided for callers to check against.
Implemented core structures for Gemini integration:
* Added message, part, content, generationConfig, request and response types.
* Introduced helper buildRequest to compose a valid request payload including system & user messages and a structured-output schema.
* Declared constants for the base endpoint and generateContent path (no network call yet).
* Kept existing ErrNotImplemented behaviour for public helpers.
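A minimal sketch of the request types and buildRequest helper described above. The field names follow the public Generative Language REST schema, but the exact set of fields and the buildRequest signature used in the package are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// part, content and generationConfig mirror a slice of the Generative
// Language API request schema.
type part struct {
	Text string `json:"text"`
}

type content struct {
	Role  string `json:"role,omitempty"`
	Parts []part `json:"parts"`
}

type generationConfig struct {
	ResponseMIMEType string `json:"responseMimeType,omitempty"`
}

type request struct {
	SystemInstruction *content         `json:"systemInstruction,omitempty"`
	Contents          []content        `json:"contents"`
	GenerationConfig  generationConfig `json:"generationConfig"`
}

// buildRequest composes a payload from a system and a user message,
// asking the model for a structured (JSON) response.
func buildRequest(system, user string) request {
	return request{
		SystemInstruction: &content{Parts: []part{{Text: system}}},
		Contents:          []content{{Role: "user", Parts: []part{{Text: user}}}},
		GenerationConfig:  generationConfig{ResponseMIMEType: "application/json"},
	}
}

func main() {
	b, _ := json.Marshal(buildRequest("be terse", "summarise this module"))
	fmt.Println(string(b))
}
```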
Add callGemini helper to llm/internal/gemini that executes a POST request to the Google Generative Language API. The function builds the endpoint URL using the GEMINI_API_KEY environment variable, marshals the request payload produced by buildRequest, and unmarshals the JSON response into a geminiResponse value. Basic validations are included:
* missing GEMINI_API_KEY
* network / marshal errors
* non-200 status codes return an informative error containing the body
This paves the way for higher-level helpers to consume real Gemini responses in subsequent tasks.
Implemented full logic to request WorkspaceChangeProposal from Gemini:
* Added local model mapper inside the gemini package.
* Build request via existing helpers and invoke callGemini.
* Parse candidate text into payload.WorkspaceChangeProposal.
* Return descriptive errors on malformed responses.
Updated dispatcher geminiProvider to delegate to the new helper. Added httptest-based unit test verifying request construction, env var validation and correct unmarshalling.
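The "parse candidate text, return descriptive errors" step might look like this. The WorkspaceChangeProposal fields and the parseProposal helper are hypothetical stand-ins for the real payload type and parsing code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WorkspaceChangeProposal is an illustrative stand-in for the real
// payload.WorkspaceChangeProposal type.
type WorkspaceChangeProposal struct {
	Path    string `json:"path"`
	Content string `json:"content"`
}

// parseProposal unmarshals a candidate's text into a proposal, returning
// descriptive errors for empty or malformed responses.
func parseProposal(candidateText string) (*WorkspaceChangeProposal, error) {
	if candidateText == "" {
		return nil, fmt.Errorf("gemini response contained no candidate text")
	}
	var p WorkspaceChangeProposal
	if err := json.Unmarshal([]byte(candidateText), &p); err != nil {
		return nil, fmt.Errorf("unmarshal proposal: %w", err)
	}
	return &p, nil
}

func main() {
	p, err := parseProposal(`{"path":"main.go","content":"package main"}`)
	fmt.Println(p.Path, err)
}
```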
Added full support for `llm/internal/gemini` to retrieve module summaries and external contexts:
* Implemented `GetModuleContext` and `GetModuleExternalContexts`, using the small Gemini model by default.
* Updated `geminiProvider` methods in `llm/dispatcher.go` to delegate to these helpers.
* Extended unit tests with coverage for the two new helpers, asserting JSON unmarshalling and correct endpoint/model selection.
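The dispatcher-delegation pattern described above can be sketched as follows. Everything except the names geminiProvider and GetModuleContext is an illustrative stand-in; the stub body merely shows the shape of the call chain, not the real behaviour:

```go
package main

import (
	"errors"
	"fmt"
)

// geminiProvider is the dispatcher-side type; each method forwards to the
// corresponding helper in the gemini package.
type geminiProvider struct{}

// getModuleContext stands in for gemini.GetModuleContext (which, per the
// PR, uses the small Gemini model by default).
func getModuleContext(module string) (string, error) {
	if module == "" {
		return "", errors.New("empty module path")
	}
	return "summary of " + module, nil
}

// GetModuleContext delegates to the package helper rather than embedding
// any HTTP logic in the dispatcher itself.
func (p geminiProvider) GetModuleContext(module string) (string, error) {
	return getModuleContext(module)
}

func main() {
	ctx, err := geminiProvider{}.GetModuleContext("llm/internal/gemini")
	fmt.Println(ctx, err)
}
```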
Adds debug logging to llm/internal/gemini.callGemini mirroring the behaviour of the OpenAI provider. Every request/response pair is now persisted under the OS temp directory using the pattern "vyb-gemini-*.json" for easier troubleshooting.
… includes gemini providers_test.go – asserts that llm.SupportedProviders() contains "gemini". These tests fulfil the remaining checklist items for environment validation and provider list coverage, finishing Gemini provider integration.
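The provider-list assertion might reduce to a membership check like this. SupportedProviders is stubbed here so the snippet is self-contained; in the repository it lives in the llm package, and the real test presumably uses the testing package:

```go
package main

import "fmt"

// SupportedProviders is a stub standing in for llm.SupportedProviders.
func SupportedProviders() []string {
	return []string{"openai", "gemini"}
}

// containsProvider mirrors the assertion made in providers_test.go: the
// provider list must include "gemini".
func containsProvider(name string) bool {
	for _, p := range SupportedProviders() {
		if p == name {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsProvider("gemini")) // true
}
```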
Update README.md and llm/README.md to include instructions and documentation for the newly added Gemini provider. This includes:
* API key environment variable (GEMINI_API_KEY).
* Provider configuration in .vyb/config.yaml.
* Model name resolution for Gemini models.
* Updated architecture overview.
* Documentation for the llm/internal/gemini package.
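The provider configuration mentioned above might look something like the fragment below. The key name is hypothetical — the PR confirms only that provider configuration lives in .vyb/config.yaml and that the key comes from GEMINI_API_KEY:

```yaml
# .vyb/config.yaml — illustrative; the actual key names may differ.
provider: gemini
# The API key itself is read from the GEMINI_API_KEY environment variable,
# not stored in this file.
```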
Now gemini is one of the provider options during `vyb init`. Implements #24.