```bash
export VERTEXAI_LOCATION="REGION"    # for Aider/LiteLLM call
export VERTEXAI_PROJECT="PROJECT_ID" # for Aider/LiteLLM call
```

#### DeepSeek API (deepseek-chat, deepseek-reasoner)

By default, this uses the `DEEPSEEK_API_KEY` environment variable.
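
For example (the value below is a placeholder; substitute your own key):

```bash
# DEEPSEEK_API_KEY is read from the environment at runtime
export DEEPSEEK_API_KEY="YOUR DEEPSEEK API KEY"
```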

#### OpenRouter API (Llama3.1)

By default, this uses the `OPENROUTER_API_KEY` environment variable.
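
It can be set the same way as the other keys (placeholder value shown):

```bash
# OPENROUTER_API_KEY is read from the environment at runtime
export OPENROUTER_API_KEY="YOUR OPENROUTER API KEY"
```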

#### Google Gemini

We support Google Gemini models (e.g., "gemini-1.5-flash", "gemini-1.5-pro") via the [google-generativeai](https://pypi.org/project/google-generativeai) Python library. By default, it uses the environment variable:

```bash
export GEMINI_API_KEY="YOUR GEMINI API KEY"
```


#### Semantic Scholar API (Literature Search)

Our code can also optionally use a Semantic Scholar API Key (`S2_API_KEY`) for higher throughput [if you have one](https://www.semanticscholar.org/product/api), though it should work without it in principle. If you have problems with Semantic Scholar, you can skip the literature search and citation phases of paper generation.
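
If you do have a key, export it like the others (placeholder value shown):

```bash
# S2_API_KEY is optional; it only raises the Semantic Scholar rate limit
export S2_API_KEY="YOUR SEMANTIC SCHOLAR API KEY"
```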