Commit 83799be

feat: implement classes for proper typing and dot notation in responses
chore: refactor dataclasses into new file
chore: update Readme
chore: bump to 0.0.8
1 parent da2a43e commit 83799be

File tree

5 files changed: +508 −109 lines changed

README.md

Lines changed: 238 additions & 25 deletions
@@ -45,7 +45,9 @@ response = xai.invoke(
     ]
 )
 
-print(response["message"])
+response_message = response.choices[0].message
+print(response_message)
+# Response: Message(role='assistant', content="Hello! I can help you with a wide range of tasks and questions. Whether you need assistance with information, problem-solving, learning something new, or just want to have a conversation, I'm here to help. What specifically would you like assistance with today?", tool_calls=None, tool_results=None, refusal=None)
 ```
 
 ## Parameters
@@ -109,27 +111,33 @@ tools = [
     }
 ]
 
-# Implement the function
+# Implement the tool function
 def get_weather(location: str) -> str:
     return f"The weather in {location} is sunny."
 
 # Initialize the client with tools and function implementations
-xai = XAI(
-    api_key="your_api_key",
+llm = XAI(
+    api_key=api_key,
     model="grok-2-1212",
     tools=tools,
-    function_map={"get_weather": get_weather}  # Map function names to implementations
+    function_map={"get_weather": get_weather}
 )
 
-# Make a request that might trigger function calling
-response = xai.invoke(
+# Make a request that will use function calling
+response = llm.invoke(
     messages=[
-        {"role": "user", "content": "What's the weather like in San Francisco?"}
+        {"role": "user", "content": "What is the weather in San Francisco?"},
     ],
-    tool_choice="auto"  # Can be 'auto', 'required', 'none', or a specific function
+    tool_choice="auto"  # Can be "auto", "required", or "none"
 )
 
-print(response["message"])
+response_message = response.choices[0].message
+print(response_message)
+# Response: Message(role='assistant', content='I am retrieving the weather for San Francisco.', tool_calls=[{'id': '0', 'function': {'name': 'get_weather', 'arguments': '{"location":"San Francisco, CA"}'}, 'type': 'function'}], tool_results=[{'tool_call_id': '0', 'role': 'tool', 'name': 'get_weather', 'content': 'The weather in San Francisco, CA is sunny.'}], refusal=None)
+
+tool_result_content = response_message.tool_results[0].content
+print(tool_result_content)
+# Response: The weather in San Francisco, CA is sunny.
 ```
 
 The SDK supports various function calling modes through the `tool_choice` parameter:
@@ -144,37 +152,242 @@ The SDK supports various function calling modes through the `tool_choice` parameter:
 
 For more details, see the [xAI Function Calling Guide](https://docs.x.ai/docs/guides/function-calling) and the [API Reference](https://docs.x.ai/api/endpoints#chat-completions).
 
-> **Note**: Currently, using `"required"` or specific function calls may produce unexpected outputs. It is recommended to use either `"auto"` or `"none"` for more reliable results.
-
 ### Required Parameters for Function Calling
 
 When using function calling, you need to provide:
 
 - `tools`: List of tool definitions with their schemas
-- `function_map`: Dictionary mapping function names to their actual implementations
 
-> **Note**: The `function_map` parameter is required when tools are provided. Each tool name must have a corresponding implementation in the function map.
+### Function Map
+
+The `function_map` optional parameter maps tool names to their Python implementations. This allows you to actually invoke the function and append its result to the response. Each function in the map must:
+
+- Have a name matching a tool definition
+- Accept the parameters specified in the tool's JSON Schema
+- Return a value that can be converted to a string
+
+> **Note**: The `function_map` parameter is not required when tools are provided. However, when omitted, only the tool call with the parameters used by the model will be included in the response.
 
 ## API Reference
 
 ### XAI Class
 
-#### `__init__(api_key: str, model: str, tools: Optional[List[Dict[str, Any]]] = None, function_map: Optional[Dict[str, Any]] = None)`
+The main class for interacting with the xAI API.
+
+#### Constructor Parameters
+
+- `api_key` (str, required): Your xAI API key
+- `model` (ModelType, required): Model to use ("grok-2-1212" or "grok-beta")
+- `tools` (List[Dict[str, Any]], optional): List of available tools
+- `function_map` (Dict[str, Callable], optional): Map of function names to implementations
+
+#### Methods
+
+##### invoke
+
+Makes a chat completion request to the xAI API.
+
+```python
+def invoke(
+    messages: List[Dict[str, Any]],  # REQUIRED
+    frequency_penalty: Optional[float] = None,  # Range: -2.0 to 2.0
+    logit_bias: Optional[Dict[str, float]] = None,
+    logprobs: Optional[bool] = None,
+    max_tokens: Optional[int] = None,
+    n: Optional[int] = None,
+    presence_penalty: Optional[float] = None,  # Range: -2.0 to 2.0
+    response_format: Optional[Any] = None,
+    seed: Optional[int] = None,
+    stop: Optional[List[str]] = None,
+    stream: Optional[bool] = None,
+    stream_options: Optional[Any] = None,
+    temperature: Optional[float] = None,  # Range: 0 to 2
+    tool_choice: Optional[Union[str, Dict[str, Any]]] = None,
+    top_logprobs: Optional[int] = None,  # Range: 0 to 20
+    top_p: Optional[float] = None,  # Range: 0 to 1
+    user: Optional[str] = None
+) -> ChatCompletionResponse
+```
+
+### Response Models
+
+The SDK uses several dataclasses to represent the API response structure:
+
+#### Message
+
+Represents a message in the chat completion response.
+
+```python
+@dataclass
+class Message:
+    role: str  # Role of the message sender (e.g., "assistant", "user")
+    content: str  # Content of the message
+    tool_calls: Optional[List[ToolCall]] = None  # List of tool calls made by the model
+    tool_results: Optional[List[ToolResult]] = None  # Results from tool executions
+    refusal: Optional[Any] = None  # Information about message refusal if applicable
+```
+
+#### ToolCall
+
+Represents a tool call in the chat completion response.
+
+```python
+@dataclass
+class ToolCall:
+    id: str  # Unique identifier for the tool call
+    function: Function  # Function details
+    type: str  # Type of the tool call
+```
+
+#### Function
+
+Represents a function in a tool call.
+
+```python
+@dataclass
+class Function:
+    name: str  # Name of the function
+    arguments: Dict[str, Any]  # Arguments passed to the function
+```
+
+#### ToolResult
+
+Represents a tool result in the chat completion response.
+
+```python
+@dataclass
+class ToolResult:
+    tool_call_id: str  # ID of the associated tool call
+    role: str  # Role (typically "tool")
+    name: str  # Name of the tool
+    content: Any  # Result content from the tool execution
+```
+
+#### ChatCompletionResponse
+
+The main response object returned by the `invoke` method.
+
+```python
+@dataclass
+class ChatCompletionResponse:
+    id: str  # Unique identifier for the completion
+    choices: List[Choice]  # List of completion choices
+    created: int  # Unix timestamp of creation
+    model: str  # Model used for completion
+    object: str  # Object type ("chat.completion")
+    system_fingerprint: str  # System fingerprint
+    usage: Optional[Usage]  # Token usage statistics
+```
+
+#### Choice
+
+Represents a single completion choice in the response.
+
+```python
+@dataclass
+class Choice:
+    index: int  # Index of this choice
+    message: Message  # The generated message
+    finish_reason: Optional[str]  # Why the model stopped generating
+    logprobs: Optional[Dict[str, Any]]  # Log probabilities if requested
+```
+
+#### Usage
+
+Contains token usage statistics for the request.
+
+```python
+@dataclass
+class Usage:
+    prompt_tokens: int  # Tokens in the prompt
+    completion_tokens: int  # Tokens in the completion
+    total_tokens: int  # Total tokens used
+    prompt_tokens_details: Optional[Dict[str, Any]]  # Detailed token usage
+```
+
+### Example Response
+
+Here's an example of a typical response when using function calling:
+
+```python
+ChatCompletionResponse(
+    id='...',
+    choices=[
+        Choice(
+            index=0,
+            message=Message(
+                role='assistant',
+                content='I am retrieving the weather for San Francisco.',
+                tool_calls=[{
+                    'id': '0',
+                    'function': {
+                        'name': 'get_weather',
+                        'arguments': '{"location":"San Francisco, CA"}'
+                    },
+                    'type': 'function'
+                }],
+                tool_results=[{
+                    'tool_call_id': '0',
+                    'role': 'tool',
+                    'name': 'get_weather',
+                    'content': 'The weather in San Francisco, CA is sunny.'
+                }],
+                refusal=None
+            ),
+            finish_reason='stop',
+            logprobs=None
+        )
+    ],
+    created=1703...,
+    model='grok-2-1212',
+    object='chat.completion',
+    system_fingerprint='...',
+    usage=Usage(
+        prompt_tokens=50,
+        completion_tokens=20,
+        total_tokens=70,
+        prompt_tokens_details=None
+    )
+)
+```
+
+## Security Best Practices
+
+When using this SDK, follow these security best practices:
+
+1. **API Key Management**
+
+   - Never hardcode your API key directly in your code
+   - Use environment variables to store your API key:
+
+   ```python
+   import os
 
-Initialize the XAI client.
+   api_key = os.getenv("XAI_API_KEY")
+   xai = XAI(api_key=api_key, model="grok-2-1212")
+   ```
 
-- `api_key`: Your xAI API key
-- `model`: The model to use for chat completions
-- `tools`: Optional list of tools available for the model to use
-- `function_map`: Optional dictionary mapping function names to their implementations
+   - Consider using a secure secrets management service in production
+   - Keep your API key private and never commit it to version control
 
-#### `invoke(messages: List[Dict[str, Any]], tool_choice: str = "auto") -> Dict[str, Any]`
+2. **Environment Variables**
 
-Run a conversation with the model.
+   - Create a `.env` file for local development (and add it to `.gitignore`)
+   - Example `.env` file:
+   ```
+   XAI_API_KEY=your_api_key_here
+   ```
 
-- `messages`: List of conversation messages
-- `tool_choice`: Function calling mode ('auto', 'required', 'none', or specific function)
-- Returns: Dictionary containing the model's response
+3. **Request Validation**
+   - The SDK automatically validates all parameters before making API calls
+   - Always handle potential exceptions in your code:
+   ```python
+   try:
+       response = xai.invoke(messages=[{"role": "user", "content": "Hello"}])
+   except Exception as e:
+       # Handle the error appropriately
+       print(f"An error occurred: {e}")
+   ```
 
 ## License
pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "xai-grok-sdk"
-version = "0.0.7"
+version = "0.0.8"
 description = "Lightweight xAI SDK with minimal dependencies"
 dependencies = [
     "requests>=2.32.3",

src/xai_grok_sdk/__init__.py

Lines changed: 23 additions & 1 deletion
@@ -1,3 +1,25 @@
 from xai_grok_sdk.xai import XAI
+from xai_grok_sdk.models import (
+    ModelType,
+    ChatCompletionRequest,
+    Message,
+    Usage,
+    Choice,
+    ChatCompletionResponse,
+    Function,
+    ToolCall,
+    ToolResult,
+)
 
-__all__ = ["XAI"]
+__all__ = [
+    "XAI",
+    "ModelType",
+    "ChatCompletionRequest",
+    "Message",
+    "Usage",
+    "Choice",
+    "ChatCompletionResponse",
+    "Function",
+    "ToolCall",
+    "ToolResult",
+]
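The expanded `__all__` above controls which names `from xai_grok_sdk import *` exposes and documents the package's public surface. A throwaway sketch of that mechanism using an in-memory module (`fake_pkg` and its contents are hypothetical, not part of the SDK):

```python
import sys
import types

# Build a fake package module in memory to demonstrate __all__ semantics,
# mirroring how xai_grok_sdk/__init__.py re-exports its names.
mod = types.ModuleType("fake_pkg")
exec(
    "class XAI: ...\n"
    "class Message: ...\n"
    "_internal = 42\n"              # not listed in __all__
    "__all__ = ['XAI', 'Message']\n",
    mod.__dict__,
)
sys.modules["fake_pkg"] = mod

# Star-import picks up exactly the names listed in __all__.
ns = {}
exec("from fake_pkg import *", ns)
print(sorted(k for k in ns if not k.startswith("__")))  # → ['Message', 'XAI']
```

Without `__all__`, a star-import would pull in every public top-level name, so listing the re-exports explicitly keeps the package's API deliberate.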
